Computer Aided First Aid Trainer
B.E. (CIS) PROJECT REPORT
by
Mohammad Mustafa Ahmedzai
Department of Computer and Information Systems Engineering
NED University of Engg. & Tech.,
Karachi-75270
Computer Aided First Aid Trainer
B.E. (CIS) PROJECT REPORT
Project Group:
M. Mustafa Ahmedzai CS-04
Ahmed Nasir CS-34
Muhammad Shozib CS-35
Sajjad Ahmad CS-61
BATCH: 2009-10
Project Advisor(s):
Mr. Shahab Tehzeeb
November 2013
Department of Computer and Information Systems Engineering
NED University of Engg. & Tech.,
Karachi-75270
ABSTRACT
Whether it is an individual, a group or a company trying to learn and practice survival tactics, CAFAT is a trainer that educates the user through a variety of real-life emergency scenarios and their first aid requirements. CAFAT can be used by commercial and corporate firms to prepare staff, both mentally and physically, for unavoidable emergencies such as fire, heart attack or snake bite. By creating an artificial, or more precisely a virtual, reality of such events using graphics and animation, we aim to train the user in how to respond to these threats.
Using Unity3D and 3D Studio Max, we created living replicas of a few selected environments: a house, a bus station and a forest. Kinect, on the other hand, provides a more natural and realistic touch by physically connecting the player with the virtual character. In short, CAFAT is not a conventional game played with a joystick or keyboard; it involves the use of both mind and body, providing a richer 3D training experience.
We have not limited CAFAT to natural hazards and health threats, but extended it to life threats as well. Bank robbery, abduction and the use of weapons are everyday scenarios that can be survived far more easily if an individual is mentally and physically prepared for them. CAFAT is an effective way to train a person in rescue and escape missions, and it can be used by both public and government organizations for a variety of purposes. Using Unity3D and 3D Studio Max, we can create a living replica of almost any environment or object and let the character interact with it.
ACKNOWLEDGEMENTS
We are grateful to ALLAH, the most Beneficent and Merciful, who gave us the strength and will to
overcome the obstacles faced during the development of this project.
Achieving our goal would not have been possible without the continuous help and support of our parents,
teachers and friends.
We feel exceptional warmth and gratitude in extending our appreciation for the sincere and generous
guidance and patronage of this report to:
Mr. Shahab Tehzeeb (Internal)
Miss Shumaila Ashfaque (Co-Internal)
We are especially thankful to Mr. Shahab Tehzeeb for his help in resolving many of our issues
throughout the project.
TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION
  1.1 DOCUMENT PURPOSE
  1.2 PROBLEM DEFINITION
  1.3 PROJECT OBJECTIVES

CHAPTER 2: INTRODUCTION TO CAFAT
  2.1 DESCRIPTION
    2.1.1 SCENARIOS
      2.1.1.1 HEART ATTACK
        2.1.1.1.1 BACKGROUND
        2.1.1.1.2 CHALLENGES
      2.1.1.2 FIRE IN THE HOUSE
        2.1.1.2.1 BACKGROUND
        2.1.1.2.2 CHALLENGES
      2.1.1.3 SNAKE BITE
        2.1.1.3.1 BACKGROUND
        2.1.1.3.2 CHALLENGES
      2.1.1.4 HOW TO PLAY
  2.2 SCOPE
  2.3 OPERATING ENVIRONMENT

CHAPTER 3: PROJECT DESIGN
  3.1 DESIGN FLOW
    3.1.1 APPROACH DESIGN FLOW
    3.1.2 IMPLEMENTATION DESIGN FLOW
  3.2 USE CASES
  3.3 SEQUENCE DIAGRAMS

CHAPTER 4: INTRODUCTION TO KINECT
  4.1 THE HARDWARE
    4.1.1 3D DEPTH SENSORS – WHAT KINECT SEES
      4.1.1.1 MORE DESCRIPTION
    4.1.2 MULTI-ARRAY MIC – WHAT KINECT HEARS
    4.1.3 STRONG INPUTS
  4.2 KINECT SDK
    4.2.1 ENGAGEMENT MODEL ENHANCEMENTS
    4.2.2 APIS, SAMPLES, AND DLL DETAILS
    4.2.3 WINDOWS 8 SUPPORT
    4.2.4 VISUAL STUDIO 2012 SUPPORT
    4.2.5 ACCELEROMETER DATA APIS
    4.2.6 EXTENDED DEPTH DATA IS NOW AVAILABLE
    4.2.7 COLOR CAMERA SETTING APIS
    4.2.8 MORE CONTROL OVER DECODING
    4.2.9 NEW COORDINATE SPACE CONVERSION APIS
  4.3 VOICE RECOGNITION
    4.3.1 TWO LISTENING MODES (MODELS)
    4.3.2 CHOOSING WORDS AND PHRASES
      4.3.2.1 DISTINCT SOUNDS
      4.3.2.2 BREVITY
      4.3.2.3 WORD LENGTH
      4.3.2.4 SIMPLE VOCABULARY
      4.3.2.5 MINIMAL VOICE PROMPTS
      4.3.2.6 WORD ALTERNATIVES
      4.3.2.7 USE PROMPTS
      4.3.2.8 ACOUSTICS
      4.3.2.9 USER ASSISTANCE
      4.3.2.10 ALTERNATIVE INPUT
    4.3.2 SEE IT, SAY IT MODEL
    4.3.3 CHOOSING RIGHT ENVIRONMENT FOR VOICE INPUTS
      4.3.3.1 AMBIENT NOISE
      4.3.3.2 SYSTEM NOISES AND CANCELLATION
      4.3.3.3 DISTANCE OF USERS TO THE SENSOR
  4.4 GESTURE
    4.4.1 INNATE AND LEARNED GESTURES
      4.4.1.1 INNATE GESTURES
      4.4.1.2 LEARNED GESTURES
    4.4.2 STATIC, DYNAMIC AND CONTINUOUS GESTURES
      4.4.2.1 STATIC GESTURES
      4.4.2.2 DYNAMIC GESTURES
      4.4.2.3 CONTINUOUS GESTURES
    4.4.3 ACCOMPLISHING GESTURE GOALS
  4.5 INTERACTIONS
    4.5.1 DESIGN SHOULD BE FOR APPROPRIATE MIND-SET OF USERS
    4.5.2 DESIGN FOR VARIABILITY OF INPUT
    4.5.3 VARY ONE-HANDED AND TWO-HANDED GESTURES
    4.5.4 BE AWARE OF TECHNICAL BARRIERS
      4.5.4.1 TRACKING MOVEMENT
      4.5.4.2 FIELD OF VIEW
      4.5.4.3 TRACKING RELIABILITY
      4.5.4.4 TRACKING SPEED
    4.5.5 REMEMBER YOUR AUDIENCE
      4.5.5.1 PHYSICAL DIFFERENCES

CHAPTER 5: ASSET CREATION
  5.1 3D MODELING
    5.1.1 INTRODUCTION TO 3D STUDIO MAX
      5.1.1.1 FEATURES
        5.1.1.1.1 MAXSCRIPT
        5.1.1.1.2 CHARACTER STUDIO
        5.1.1.1.3 SCENE EXPLORER
        5.1.1.1.4 TEXTURE ASSIGNMENT/EDITING
        5.1.1.1.5 SKELETONS AND IK – INVERSE KINEMATICS
      5.1.1.2 INDUSTRIAL USAGE
      5.1.1.3 EDUCATIONAL USAGE
  5.2 CHARACTER ANIMATION
    5.2.1 INTRODUCTION TO AUTODESK MOTIONBUILDER
      5.2.1.1 FEATURES
  5.3 INTERFACE DESIGN
    5.3.1 INTRODUCTION TO ADOBE PHOTOSHOP
      5.3.1.1 FILE FORMATS
      5.3.1.2 PHOTOSHOP PLUG-INS
      5.3.1.3 BASIC TOOLS
    5.3.2 INTRODUCTION TO ADOBE ILLUSTRATOR
      5.3.2.1 FILE FORMAT

CHAPTER 6: INTRODUCTION TO UNITY 3D
  6.1 UNITY BASICS
  6.2 LEARNING THE INTERFACE
  6.3 CREATING SCENES
    6.3.1 CREATING A PREFAB
    6.3.2 ADDING COMPONENTS & SCRIPTS
    6.3.3 PLACING GAMEOBJECTS
    6.3.4 WORKING WITH CAMERAS
    6.3.5 LIGHTS
  6.4 ASSET TYPES, CREATION AND IMPORT
  6.5 CREATING GAMEPLAY

CHAPTER 7: PROJECT DEVELOPMENT
  7.1 CREATING AND PREPARING MODELS
  7.2 CREATING ANIMATIONS
  7.3 CREATING ENVIRONMENT
    7.3.1 TERRAIN
    7.3.2 IMPORTING MODELS
    7.3.3 PLACEMENT OF MODELS
    7.3.4 ADDING COMPONENTS
      7.3.4.1 CHARACTER CONTROLLER
      7.3.4.2 PHYSICS
        7.3.4.2.1 RIGID BODIES
        7.3.4.2.2 COLLIDERS
      7.3.4.3 AUDIO
  7.4 SCRIPTING
    7.4.1 LANGUAGES USED
      7.4.1.1 UNITY SCRIPT
      7.4.1.2 C#
    7.4.2 INTEGRATING KINECT SDK WITH UNITY
      7.4.2.1 CHALLENGE - MANAGED/UNMANAGED CODE INTEROPERABILITY
      7.4.2.2 INTERACTING CHARACTER WITH KINECT
      7.4.2.3 DEFINING CUSTOM GESTURES
      7.4.2.4 GRABBING OBJECTS
      7.4.2.5 VOICE RECOGNITION
      7.4.2.6 CAMERA VIEW CONTROLLING
      7.4.2.7 GAME LOGIC

CHAPTER 8: FUTURE ENHANCEMENTS & RECOMMENDATIONS

CHAPTER 9: CONCLUSION

REFERENCES

GLOSSARY
CHAPTER 1
Introduction
With the accelerating use of games on video game consoles, mobile devices and personal
computers, there is no doubt that many people today turn to games not just for entertainment
but also for academic and educational purposes. With the development of modern game engines,
game development has become a rich learning pool that every developer wishes to dive into to
discover the power of 3D animation, which is steadily bringing the virtual world closer to the
real world we live in. With a clear understanding of the potential of gaming technologies, we
came forward with the idea of the "Computer Aided First Aid Trainer (CAFAT)": an advanced 3D
training experience in which the user physically interacts with the game environment and learns
first aid survival tactics as well as escape and rescue missions. CAFAT aims at training
individuals in how to react and respond to emergency scenarios by exercising both their
physical and mental abilities.
The project is not limited to individual use; it can also be used by commercial firms to
train their employees in how to react in an emergency. Using Unity3D and 3D Studio Max, we can
create a living replica of almost any environment or object and let the character interact with it.
1.1 Document Purpose
This report covers every step, from requirement gathering to core development, of the
Computer Aided First Aid Trainer (CAFAT). It provides a complete understanding of what is to
be expected from the project, gives a clear picture of CAFAT's use and core functionality, and
fully explains the tools, resources and technology involved.
1.2 Problem Definition
Remember the man who jumped from the eighth floor of the State Life building? He was
shown by the country's news media for more than twenty minutes, hanging outside the window
of the burning building. The entire crowd stood by as spectators, unable to apply a little
common sense to rescue him. The public was untrained and did not know how to react in such a
scenario. A large stretched sheet of parachute fabric could have saved the man from breaking
his bones and dying on the spot, but people failed to think that critically and rationally.
Moreover, many people do not even know the helpline numbers of emergency services such as
Edhi or the fire brigade; as a result, they waste time and fail to respond to an emergency.
Thousands die each year this way because bystanders fail to provide quick first aid.
Students are rarely taught first aid survival techniques and necessary precautions at school.
Deaths from snake bite, heart attack and fire are common examples of scenarios where quick
first aid, applied properly, can save a life.
We wanted to provide a realistic training experience involving both physical and mental
activity. We therefore avoided conventional input devices such as joysticks and keyboards, and
instead used Microsoft's Kinect for gesture sensing and voice command processing. Kinect lets
a player interact naturally with the computer simply by gesturing and speaking. This adds both
fun and interactivity to the game, which, combined with the scenarios we created, works
wonders, as you will discover later in this report.
1.3 Project Objectives
We aim at creating a valuable 3D video trainer that educates individuals, the public and
corporate firms in survival tactics by presenting real-life scenarios, such as rescue and
escape missions, and by teaching, through training tutorials, how to protect a victim or
escape a deadly incident. Each scenario takes the player live inside the game, giving a
natural feel of physical interaction through body movement and voice commands. Kinect combined
with Unity3D gives the computer eyes, ears and a brain, and this is what inspired and
motivated us to present an end product worth playing for people of all ages.
CHAPTER 2
Introduction to CAFAT
CAFAT stands for "Computer Aided First Aid Trainer". As the name suggests, it is a
trainer that guides the user through survival tactics that will help him in call-to-action
scenarios. The user interacts with the game through the Kinect sensor, which detects his
skeletal movements and maps his bone transformations onto the game character. The entire game
is built in Unity3D, the 3D models are crafted in 3D Studio Max, and character animation is
done with the help of MotionBuilder.
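The skeletal mapping described above can be sketched as a small piece of vector math: for each bone of the character rig, take the positions of the two Kinect joints that bound it and orient the bone along the direction between them. The code below is an illustrative sketch, not the project's actual implementation; the `Vec3` type is a hypothetical stand-in for the Kinect SDK's joint position data (which reports X, Y, Z in meters).

```csharp
using System;

// Minimal stand-in for a Kinect joint position.
struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class BoneMapping
{
    // Normalized direction from a parent joint (e.g. shoulder) to a child
    // joint (e.g. elbow). A character rig would rotate the matching bone
    // so that it points along this direction each frame.
    public static Vec3 BoneDirection(Vec3 parent, Vec3 child)
    {
        double dx = child.X - parent.X;
        double dy = child.Y - parent.Y;
        double dz = child.Z - parent.Z;
        double len = Math.Sqrt(dx * dx + dy * dy + dz * dz);
        if (len == 0) return new Vec3(0, 0, 0); // joints coincide: no direction
        return new Vec3(dx / len, dy / len, dz / len);
    }
}
```

Repeating this for every tracked joint pair, once per frame, is what keeps the on-screen character's pose synchronized with the player's.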
2.1 Description
We have currently developed three scenarios: Heart Attack, Fire in the House and Snake
Bite. Each scenario has exciting challenges that the user needs to complete. A random
challenge is offered each time the user plays the same scenario, in order to keep the interest
level high and offer a true gaming experience. The user can pause, play or stop a level, or go
back to the main menu, using voice commands. The character's movement is synchronized with
that of the real user through Kinect's gesture-sensing capability. All a player requires is a
Windows operating system, a display, speakers and Kinect for Windows. A brief introduction to
each scenario follows.
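The session flow described above, one randomly chosen challenge per play plus voice commands for pausing and quitting, can be sketched roughly as follows. The class, state and command names are illustrative assumptions for this report, not the project's actual code:

```csharp
using System;
using System.Collections.Generic;

enum GameState { Menu, Playing, Paused }

class ScenarioSession
{
    private static readonly Random Rng = new Random();

    public GameState State { get; private set; } = GameState.Menu;
    public string CurrentChallenge { get; private set; }

    // Pick one challenge at random so repeated plays of the same
    // scenario stay varied.
    public void Start(IList<string> challenges)
    {
        CurrentChallenge = challenges[Rng.Next(challenges.Count)];
        State = GameState.Playing;
    }

    // Map a recognized voice phrase to a state change; unknown
    // phrases are ignored.
    public void OnVoiceCommand(string command)
    {
        switch (command.ToUpperInvariant())
        {
            case "PAUSE GAME":
                if (State == GameState.Playing) State = GameState.Paused;
                break;
            case "RESUME":
                if (State == GameState.Paused) State = GameState.Playing;
                break;
            case "QUIT GAME":
                State = GameState.Menu;
                break;
        }
    }
}
```

In the real game the voice phrases would come from Kinect's speech recognition rather than plain strings, but the dispatch logic has the same shape.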
2.1.1 Scenarios
There are three scenarios; the background of each, and the challenges the player faces
in it, are described below.
2.1.1.1 Heart attack
2.1.1.1.1 Background
While walking downtown, the player sees a person standing at a bus stop suffer a sudden
heart attack. The player has three minutes to provide the victim with first aid that could
help him recover. Failure to fulfill any one of the challenges results in a failed rescue
attempt, and the player is taken back to the restart menu.
2.1.1.1.2 Challenges
The player is given a single hint, out of a total of three, each time he loads the
scenario. The hints are:
1. Aspirin tablet - Give the victim an aspirin tablet to help prevent the blood from
clotting. The player can get this medicine from the nearest medical store.
2. CPR - Calmly pump the victim's chest to support his circulation.
3. Ambulance - Call an ambulance for help by dialing the correct helpline number.
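The pass/fail rule above amounts to a simple check: the rescue succeeds only if at least one of the three hints is completed before the three-minute clock runs out. A minimal sketch, with the time limit taken from the scenario description and the helper names being our own:

```csharp
static class RescueAttempt
{
    // Three minutes, per the heart attack scenario description.
    public const double TimeLimitSeconds = 180.0;

    // The attempt succeeds only if at least one challenge (aspirin,
    // CPR, or the ambulance call) completed before time ran out;
    // otherwise the player is sent back to the restart menu.
    public static bool Succeeded(bool anyChallengeCompleted, double elapsedSeconds)
        => anyChallengeCompleted && elapsedSeconds <= TimeLimitSeconds;
}
```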
2.1.1.2 Fire in the House
2.1.1.2.1 Background
The player is trapped inside a burning house: a two-story bungalow whose ground-floor
kitchen has caught fire. The player is on the first floor and must either fight the fire or
escape it. The fire eats up the house slowly at a fixed burning rate. The decision whether to
fight the fire with an extinguisher or to escape is based solely on the percentage of the
house under fire, so the player needs to use common sense to choose the best possible
technique for the situation.
2.1.1.2.2 Challenges
Given the fixed burning rate, the player must meet at least one of the following
challenges to qualify for the next round:
1. Alarm escape - Break the window and ring the alarm.
2. Fire emergency service - Call the fire brigade by dialing the correct helpline number.
3. Fire extinguisher - Fight the fire himself using an extinguisher.
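Because the burning rate is fixed, the fraction of the house on fire grows linearly with elapsed time, and the fight-or-escape decision can be modeled as a threshold on that fraction. The sketch below is illustrative only; in particular, the 50% threshold is our own assumption, since the report says only that the decision depends on the percentage of the house under fire:

```csharp
using System;

static class FireLogic
{
    // With a fixed burning rate (percent of the house per second),
    // the burning fraction grows linearly, capped at 100%.
    public static double PercentBurning(double ratePerSecond, double elapsedSeconds)
        => Math.Min(100.0, ratePerSecond * elapsedSeconds);

    // Illustrative decision rule: below the (assumed) threshold the
    // fire is still small enough to fight; above it, escape.
    public static string Recommend(double percentBurning)
        => percentBurning < 50.0
            ? "Fight the fire with the extinguisher"
            : "Escape: break the window and ring the alarm";
}
```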
2.1.1.3 Snake Bite
2.1.1.3.1 Background
The environment here is a forest with a medieval house. The player finds his friend
bitten by a poisonous snake. The victim falls to the ground, and the player must act within
the allotted time to save his friend's life. Failure to fulfill any one of the challenges
results in a failed rescue attempt, and the player is taken back to the restart menu.
2.1.1.3.2 Challenges
The options here are:
1. The player can bring the anti-venom from the house.
2. He can instead use a rope, tying it around the victim's leg to stop the poison from
circulating to the upper part of the body.
2.1.1.4 How to Play
To provide a comprehensive guide to how the game is played, how to control the
character's gestures and skeletal movements, and how to issue voice commands correctly, a
video tutorial is included in the main menu. The video guide gives a clear demonstration of
the emergency scenarios and of how to interact properly with the virtual character using
Kinect.
2.2 Scope
Whether it is an individual, a group or a company trying to learn and practice survival
tactics, CAFAT is a trainer that educates the user with real-life emergency scenarios and
their first aid requirements. CAFAT can be used by commercial and corporate firms to prepare
staff, mentally and physically, for unavoidable emergencies such as fire, gas leakage, robbery
or gunshots. We have not limited CAFAT to natural hazards and health threats, but extended it
to life threats as well. Bank robbery, abduction and the use of weapons are everyday scenarios
that can be survived far more easily if an individual is mentally and physically prepared for
them. By creating an artificial, or more precisely a virtual, reality of such events using
graphics and animation, a user can be trained in how to respond to these threats. CAFAT is an
effective way to train a person in rescue and escape missions, and it can be used by both
public and government organizations for a variety of purposes.
Using Unity3D and 3D Studio Max, we can create a living replica of almost any
environment or object and let the character interact with it.
2.3 Operating Environment
The requirements are:
1. Kinect for Windows
2. Windows 7 64-bit or later
3. A graphics card with at least 128 MB of memory
CHAPTER 3
Project Design
3.1 Design Flow
3.1.1 Approach Design Flow
1. Probing of idea
2. Identify criteria and constraints
3. Brainstorm scenarios
4. Selection of technologies
5. Writing storyline
6. Explore possibilities
7. Selection of implementation approach
8. Go through literature and tutorials
9. Generate example prototype
10. Actual implementation
Design Flow 1: Approach
3.1.2 Implementation Design Flow
1. Assets collection
2. Build 3D terrain
3. 3D character design
4. Import character to Unity 3D
5. Create animations
6. Import animation to Unity
7. Scripting
8. Integrate Kinect with Unity
9. Test scenario
10. Debugging and finalizing
Design Flow 2: Implementation
3.2 Use Cases
Use Case 1: Heart Attack Scenario – player actions: Aspirin, CPR, Call Ambulance, Pause Game; outcomes: Time Out, Success Level, Fail Level.
Use Case 2: Fire in the House Scenario – player actions: Alarm & Escape, Extinguish Fire, Call Fire Emergency Service, Pause Game; outcomes: Time Out, Success Level, Fail Level.
Use Case 3: Snake Bite Scenario – player actions: Anti-Venom, Tie Rope, Pause Game; outcomes: Time Out, Success Level, Fail Level.
Use Case 4: Main Menu – options: Heart Attack, Fire in the House, Snake Bite, How to Play, Quit Game.
3.3 Sequence Diagrams
Sequence Diagram 1: Main Menu (lifelines: Kinect, Player, Main Menu, Heart Attack, Fire in the House, Snake Bite; voice commands: "Heart Attack", "Fire in the House", "How to Play", "Quit Game")
Sequence Diagram 2: Heart Attack Scenario
Sequence Diagram 3: Fire in the House Scenario
Sequence Diagram 4: Snake Bite Scenario (lifelines: Kinect, Player, Snake Bite scenario, Anti-Venom, Tie Rope; messages: Use Anti-Venom, Tie Rope, Time Out, "Pause Game", "Resume/Restart"; outcomes: success scenario within time, fail scenario on time out, return to Main Menu)
CHAPTER 4
Introduction to Kinect
Kinect for Windows is basically a gesture recognizing and voice controlling device.
Kinect for Windows gives computers eyes, ears, and a brain. With Kinect for Windows,
businesses and developers are creating applications that allow their customers to interact
naturally with computers by simply gesturing and speaking. It is also widely used for security purposes. One of the finest features Kinect possesses is its intelligence: it can detect multiple human bodies while minimizing the risk of confusing them with one another during detection. Following are the key features and hardware description of Kinect for Windows.
4.1 The Hardware
4.1.1 3D Depth Sensors – What Kinect Sees:
Kinect for Windows is versatile, and can see people holistically, not just smaller
hand gestures. Six people can be tracked, including two whole skeletons. The sensor has an
RGB (red-green-blue) camera for color video, and an infrared emitter and camera that
measure depth. The measurements for depth are returned in millimeters. The Kinect for
Windows sensor enables a wide variety of interactions, but any sensor has "sweet spots" and limitations. With this in mind, we defined its focus and limits as follows: Physical limits – the actual capabilities of the sensor and what it can see. Sweet spots – areas where people experience optimal interactions, given that they'll often have a large range of movement and need to be tracked with their arms or legs extended. The diagrams below illustrate the depth and vision ranges and boundaries of Kinect for Windows.
Kinect 1: Hardware
4.1.1.1 More Description:
Kinect for Windows can track up to six people within its view, including two whole
skeletons.
Kinect for Windows can track skeletons in default full skeleton mode with 20 joints.
Kinect for Windows can also track seated skeletons with only the upper 10 joints.
Kinect 2: Depth Sensor Ranges
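As an illustration, the two tracking modes can be thought of as joint subsets. The sketch below lists the Kinect v1 joint names for both modes as a reference; it is a conceptual sketch in Python, not code taken from the SDK:

```python
# The 20 joints of the default full-skeleton mode (Kinect v1 naming).
FULL_SKELETON = [
    "HipCenter", "Spine", "ShoulderCenter", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
    "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
    "HipRight", "KneeRight", "AnkleRight", "FootRight",
]

# Seated mode tracks only the upper 10 joints (head, shoulders, arms, hands).
SEATED_SKELETON = [
    "ShoulderCenter", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
]

def joints_for_mode(seated: bool):
    """Return the joint names tracked in the chosen skeleton mode."""
    return SEATED_SKELETON if seated else FULL_SKELETON
```

A gesture recognizer can use such a list to decide which joints it is allowed to query in the current mode.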
4.1.2 Multi-Array Mic – What Kinect Hears:
Kinect for Windows is unique because its single sensor captures both voice and
gesture, from face tracking and small movements to whole-body. The sensor has four
microphones that enable your application to respond to verbal input, in addition to
responding to movement.
Kinect 3: Skeleton Detection
Kinect 4: Audio Sensing
The Kinect for Windows sensor detects audio input from +50 to −50 degrees in front of the sensor.
1. The microphone array can be pointed at 10-degree increments within the 100-degree
range. This can be used to be specific about the direction of important sounds, such as a
person speaking, but it will not completely remove other ambient noise.
2. The microphone array can cancel 20 dB (decibels) of ambient noise, which improves audio fidelity. That's about the sound level of a whisper. (Kinect for Windows supports monophonic sound cancellation, but not stereophonic.)
3. Sound coming from behind the sensor gets an additional 6dB suppression based on the
design of the microphone housing.
You can also programmatically direct the microphone array – for example, toward a set location, or following a skeleton as it's tracked. By default, Kinect for Windows tracks the loudest audio input.
Kinect 5: Microphones
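The beam-pointing rule described above (10-degree increments within the 100-degree range) can be sketched as a small helper function. This Python snippet only illustrates the constraint; a real application would use the SDK's own beam-control API:

```python
def snap_beam_angle(requested_deg: float) -> int:
    """Snap a requested microphone-beam direction to the nearest
    10-degree step within the sensor's +/-50 degree range."""
    clamped = max(-50.0, min(50.0, requested_deg))   # stay inside the array's range
    return int(round(clamped / 10.0)) * 10           # 10-degree increments only
```

For example, a request to aim at 23 degrees would be snapped to the 20-degree beam.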
4.1.3 Strong Inputs:
In order to provide a good experience and not frustrate users, a strong voice and
gesture interaction design should fulfill a number of requirements. To start with, it should be
natural, with an appropriate and smooth learning curve for users. A slightly higher learning
curve, with richer functionality, may be appropriate for expert users who will use the
application frequently (for example, in an office setting for daily tasks).
A strong voice and gesture interaction design should satisfy the following points:
1. Considerate of user expectations from their use of other common input mechanisms
(touch, keyboard, mouse)
2. Ergonomically comfortable
3. Low in interactional cost for infrequent or large numbers of users (for example, a kiosk
in a public place)
4. Integrated, easily understandable, user education for any new interaction.
5. Precise, reliable, and fast
6. Considerate of sociological factors (People should feel comfortable)
Kinect 6: Input Recognition
Diagram Description:
1. Intuitive, with easy "mental mapping."
2. Easy to back out of if mistakenly started, rather than users having to complete the action
before undoing or canceling.
3. Efficient at a variety of distance ranges.
4. An appropriate amount and type of content should be displayed; although the system is smart enough to handle more, for the user's convenience keep the amount moderate.
4.2 Kinect SDK
The Kinect for Windows SDK provides the tools and APIs, both native and
managed, that you need to develop Kinect-enabled applications for Microsoft Windows.
Manufacturers have built a new Interactions framework which provides pre-
packaged, reusable components that allow for even more exciting interaction possibilities.
These components are supplied in both native and managed packages for maximum
flexibility, and are also provided as a set of WPF controls.
4.2.1 Engagement Model Enhancements
The Engagement model determines which user is currently interacting with the
Kinect-enabled application.
This has been greatly enhanced to provide more natural interaction when a user
starts interacting with the application, and particularly when the sensor detects multiple
people. Developers can also now override the supplied engagement model as desired.
4.2.2 APIs, Samples, and DLL Details
A set of WPF interactive controls are provided to make it easy to incorporate these
interactions into your applications.
4.2.3 Windows 8 Support
Using the Kinect for Windows SDK, you can develop a Kinect for Windows desktop application on Windows 8.
4.2.4 Visual Studio 2012 Support
The SDK supports development with Visual Studio 2012, including the new .NET
Framework 4.5.
4.2.5 Accelerometer Data APIs
Data from the sensor's accelerometer is now exposed in the API. This enables
detection of the sensor's orientation.
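As a sketch of what orientation detection from accelerometer data involves: with the sensor stationary, the accelerometer reads the gravity vector, and the tilt follows from simple trigonometry. The axis convention below is an assumption for illustration, not the SDK's:

```python
import math

def sensor_tilt_degrees(ax: float, ay: float, az: float) -> float:
    """Estimate the sensor's tilt from a gravity-only accelerometer
    reading, in degrees. Assumes the sensor is stationary, so the
    accelerometer measures gravity alone; y is taken as the up axis."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        raise ValueError("zero acceleration vector")
    # Angle of the gravity vector relative to the sensor's horizontal plane.
    return math.degrees(math.asin(max(-1.0, min(1.0, ay / g))))
```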
4.2.6 Extended Depth Data Is Now Available
CopyDepthImagePixelDataTo() now provides details beyond 4 meters; please note
that the quality of data degrades with distance. In addition to Extended Depth Data, usability
of the Depth Data API has been improved. (No more bit masking is required.)
4.2.7 Color Camera Setting APIs
The Color Camera Settings can now be optimized to your environment. You can
now fine-tune white balance, contrast, hue, saturation, and other settings.
4.2.8 More Control over Decoding
New RawBayer Resolutions for ColorImageFormat give you the ability to do your
own Bayer to RGB conversions on CPU or GPU.
4.2.9 New Coordinate Space Conversion APIs
There are several new APIs to convert data between coordinate spaces: color, depth,
and skeleton. There are two sets of APIs: one for converting individual pixels and the other
for converting an entire image frame.
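Conceptually, converting a depth pixel into a 3D point is a pinhole-camera calculation. A minimal sketch follows, with illustrative resolution and field-of-view values; a real application should use the SDK's own conversion APIs rather than hand-rolled intrinsics:

```python
import math

# Illustrative values (assumptions for the sketch, not SDK constants).
DEPTH_W, DEPTH_H = 640, 480
FOV_X_DEG, FOV_Y_DEG = 57.0, 43.0   # nominal depth-camera field of view

def depth_pixel_to_camera_space(px: int, py: int, depth_mm: int):
    """Convert a depth-image pixel plus its depth (millimeters) into an
    (x, y, z) point in meters, using a simple pinhole-camera model."""
    z = depth_mm / 1000.0
    fx = (DEPTH_W / 2.0) / math.tan(math.radians(FOV_X_DEG / 2.0))
    fy = (DEPTH_H / 2.0) / math.tan(math.radians(FOV_Y_DEG / 2.0))
    x = (px - DEPTH_W / 2.0) * z / fx
    y = (DEPTH_H / 2.0 - py) * z / fy   # flip so +y points up
    return (x, y, z)
```

The center pixel maps straight ahead of the camera, while pixels to the right of center map to positive x.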
4.3 Voice Recognition
Using voice in your Kinect for Windows–enabled application allows you to choose specific words or phrases to listen for and use as triggers. Words or phrases spoken as commands aren't conversational and might not seem like a natural way to interact, but when voice input is designed and integrated well, it can make experiences feel fast and increase your confidence in the user's intent.
When you use Kinect for Windows voice-recognition APIs to listen for specific
words, confidence values are returned for each word while your application is listening. You
can tune the confidence level at which you will accept that the sound matches one of your
defined commands.
Following are the essentials which ensure a proper level of confidence:
1. Try to strike a balance between reducing false positive recognitions and making it
difficult for users to say the command clearly enough to be recognized.
2. Match the confidence level to the severity of the command. For example, "Purchase now" should probably require higher confidence than "Previous" or "Next."
3. It is really important to try this out in the environment where your application will be
running, to make sure it works as expected. Seemingly small changes in ambient noise
can make a big difference in reliability.
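The per-command confidence tuning described above can be sketched as follows; the command names and threshold values are hypothetical:

```python
# Hypothetical per-command confidence thresholds: severe commands demand
# higher confidence than low-risk navigation commands.
THRESHOLDS = {"purchase now": 0.85, "previous": 0.45, "next": 0.45}

def accept_command(word: str, confidence: float):
    """Return the command if the recognizer's confidence clears the
    command-specific threshold, otherwise None (treat as a false positive)."""
    threshold = THRESHOLDS.get(word.lower())
    if threshold is None or confidence < threshold:
        return None
    return word.lower()
```

With these values, "Next" spoken at 0.6 confidence is accepted, while "Purchase now" at the same confidence is rejected and can fall back to a confirmation prompt.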
4.3.1 Two Listening Modes (Models):
There are two main listening models for using voice with Kinect for Windows: using a keyword or trigger, and "active listening."
1. The sensor only listens for a single keyword. When it hears that word, it listens for additional specified words or phrases. This is the best way to reduce false activations. The keyword you choose should be very distinct so that it isn't easily misinterpreted.
For example, on Xbox 360, "Xbox" is the keyword. Not many words sound like "Xbox," so it's a well-chosen keyword.
2. The sensor is always listening for all of your defined words or phrases. This works fine if you have a very small number of distinct words or phrases – but the more you have, the more likely it is that you'll have false activations. This also depends on how much you expect the user to be speaking while the application is running, which will most likely depend on the specific environment and scenario.
Here is the diagrammatic illustration of both models:
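The first (keyword/trigger) model can also be sketched as a tiny state machine; the keyword and command phrases below are hypothetical:

```python
class KeywordListener:
    """Minimal sketch of the keyword/trigger listening model: ignore
    everything until the trigger word is heard, then accept one command."""

    def __init__(self, keyword, commands):
        self.keyword = keyword.lower()
        self.commands = {c.lower() for c in commands}
        self.armed = False          # True once the keyword has been heard

    def hear(self, phrase):
        phrase = phrase.lower()
        if not self.armed:
            self.armed = (phrase == self.keyword)
            return None             # the keyword itself triggers nothing
        self.armed = False          # one command per trigger
        return phrase if phrase in self.commands else None
```

Until the trigger word arrives, every phrase is ignored, which is exactly why this model produces so few false activations.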
4.3.2 Choosing Words and Phrases
4.3.2.1 Distinct sounds
Avoid alliteration, words that rhyme, common syllable lengths, common vowel
sounds, and using the same words in different phrases.
Kinect 7: Listening Modes
Kinect 8: Distinct Sounds
4.3.2.2 Brevity
Keep phrases short (1-5 words).
4.3.2.3 Word length
Be wary of one-syllable keywords, because they're more likely to overlap with others.
4.3.2.4 Simple vocabulary
Use common words where possible for a more natural feeling experience and for
easier memorization.
4.3.2.5 Minimal voice prompts
Keep the number of phrases or words per screen small (3-6).
4.3.2.6 Word alternatives
If you have even more items that need to be voice-accessible, or for non-text-based content, consider using numbers to map to choices on a screen, as in this example.
Kinect 9: Brevity
Kinect 10: Word Length
Kinect 11: Word Alternatives
4.3.2.7 Use Prompts
For commands recognized with low confidence, help course correct by providing prompts – for example, "Did you mean 'camera'?"
4.3.2.8 Acoustics
Test your words and phrases in an acoustic environment similar to where you intend
your application to be used.
4.3.2.9 User assistance
Display keywords onscreen, or take users through a beginning tutorial.
4.3.2.10 Alternative input
Voice shouldn't be the only method by which a user can interact with the application. Build in allowances for the person to use another input method in case voice isn't working or becomes unreliable.
Kinect 12: Acoustics
Kinect 13: Interaction Input
4.3.3 See it, say it model
The "see it, say it" model is one where the available phrases are defined by the text on the screen. This means that a user could potentially read any UI text and have it trigger a reaction. A variation of this is to have a specified text differentiator, such as size, underline, or a symbol, that indicates that the word can be used as a spoken command. If you do that, you should use iconography or a tutorial in the beginning of the experience to inform the user that the option is available, and teach them what it means. Either way, there should be a clear, visual separation between actionable text on a screen and static text.
4.3.4 Choosing the Right Environment for Voice Inputs
There are a few environmental considerations that will have a significant effect on
whether or not you can successfully use voice in your application.
4.3.3.1 Ambient noise
The sensor focuses on the loudest sound source and attempts to cancel out other ambient noise (up to around 20 dB). This means that if there's other conversation in the room (usually around 60-65 dB), the accuracy of your speech recognition is reduced.
Amplify that to the sound level of a mall or cafeteria and you can imagine how
much harder it is to recognize even simple commands in such an environment. At some
level, ambient noise is unavoidable, but if your application will run in a loud environment,
voice might not be the best interaction choice. Ideally, you should only use voice if:
Kinect 14: See it, Say it model
1. The environment is quiet and relatively closed off
2. There won't be multiple people speaking at once
4.3.3.2 System noises and cancellation
Although the sensor is capable of more complex noise cancellation if you want to
build that support, the built-in functionality only cancels out monophonic sounds, such as a
system beep, but not stereophonic. This means that even if you know that your application
will be playing a specific song, or that the song will be playing in the room, Kinect for
Windows cannot cancel it out, but if you're using monophonic beeps to communicate something to your user, those can be cancelled.
4.3.3.3 Distance of users to the sensor:
When users are extremely close to the sensor, the sound level of their voice is high.
However, as they move away, the level quickly drops off and becomes hard for the sensor to
hear, which could result in unreliable recognition or require users to speak significantly
louder. Ambient noise also plays a role in making it harder for the sensor to hear someone as
they get farther away. You might have to make adjustments to find a "sweet spot" for your given environment and setup, where a voice of normal volume can be picked up reliably. In an environment with low ambient noise and soft PC sounds, a user should be able to comfortably speak at normal to low voice levels (49-55 dB) at both near and far distances.
Kinect 15: Ambiguity in Multiple Voices
4.4 Gesture
This section covers any form of movement that can be used as an input or
interaction to control or influence an application. Gestures can take many forms, from
simply using your hand to target something on the screen, to specific, learned patterns of
movement, to long stretches of continuous movement using the whole body.
Gesture is an exciting input method to explore, but it also presents some intriguing
challenges. Following are a few examples of commonly used gesture types.
4.4.1 Innate and learned gestures
You can design for innate gestures that people might be familiar with, as well as ones they'll need to learn and memorize.
4.4.1.1 Innate gestures
Gestures that the user intuitively knows or that make sense, based on the person's understanding of the world, including any skills or training they might have.
Examples:
1. Pointing to aim
2. Grabbing to pick up
3. Pushing to select
Kinect 16: Distance Range
4.4.1.2 Learned gestures
Gestures you must teach the user before they can interact with Kinect for Windows.
Examples:
1. Waving to engage
2. Making a specific pose to cancel an action
4.4.2 Static, Dynamic and Continuous gestures
Whether users know a given gesture by heart or not, the gestures you design for your
Kinect for Windows application can range from a single pose to a more prolonged motion.
4.4.2.1 Static Gestures
A pose or posture that the user must match and that the application recognizes as
meaningful.
Kinect 17: Innate Gestures
Kinect 18: Learned Gestures
4.4.2.2 Dynamic Gestures
A defined movement that allows the user to directly manipulate an object or control
and receive continuous feedback.
4.4.2.3 Continuous Gestures
Prolonged tracking of movement where no specific pose is recognized but the
movement is used to interact with the application.
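As an illustration of the static-gesture idea, a recognizer can simply test whether tracked joints match a target pose within a tolerance. The joint names and threshold below are assumptions for the sketch, not SDK types:

```python
def is_hands_above_head(joints, margin_m=0.05):
    """Static-gesture sketch: true when both hands are tracked above the
    head by at least `margin_m` meters. `joints` maps joint name ->
    (x, y, z) in meters with +y up; names are illustrative only."""
    try:
        head_y = joints["head"][1]
        return (joints["hand_left"][1] > head_y + margin_m and
                joints["hand_right"][1] > head_y + margin_m)
    except KeyError:
        return False   # a required joint was not tracked this frame
```

Dynamic and continuous gestures extend the same idea by evaluating such predicates over a sequence of frames rather than a single pose.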
4.4.3 Accomplishing Gesture Goals
The users' goal is to accomplish their tasks efficiently, easily, and naturally. Your goal is to enable them to fulfill theirs. Users should agree with these statements as they use gesture in your application:
1. I quickly learned all the basic gestures.
2. Now that I learned a gesture, I can quickly and accurately perform it.
3. When I gesture, I'm ergonomically comfortable.
4. When I gesture, the application is responsive and provides both immediate and ongoing feedback.
Kinect 19: Static Gestures
Kinect 20: Dynamic Gestures
Kinect 21: Continuous Gestures
4.5 Interactions
4.5.1 Design should be for appropriate mind-set of users
Challenge is fun! If a user is in game mindset and can't perform a gesture, then it's a challenge to master it and do better next time. In UI mindset, challenge is frustrating: if a user is in UI mindset and can't perform a gesture, he or she will be frustrated and have low tolerance for any learning curve. In game mindset, a silly gesture can be fun or entertaining. In UI mindset, a silly gesture is awkward or unprofessional.
4.5.2 Design for variability of input
Logical gestures have meaning and they relate to associated UI tasks or actions. The feedback should relate to the user's physical movement.
Simply "asking users to wave" doesn't guarantee the same motion. They might wave:
1. From their wrist
2. From their elbow
Kinect 22: Interaction Designing
3. With their whole arm
4. With an open hand moving from left to right
5. By moving their fingers up and down together
4.5.3 Vary one-handed and two-handed gestures
Use one-handed gestures for all critical-path tasks. They're efficient and accessible, and easier than two-handed gestures to discover, learn, and remember.
Use two-handed gestures for noncritical tasks (for example, zooming) or for advanced users. Two-handed gestures should be symmetrical because they're then easier to perform and remember.
4.5.4 Be aware of technical barriers
If you're using skeleton data to define your gestures, you'll have greater flexibility, but some limitations as well.
Kinect 23: Variability in Gesture Designing
Kinect 24: One hand Gesture Kinect 25: Two hand Gesture
4.5.4.1 Tracking movement
Keeping arms and hands to the side of the body when performing gestures makes
them easier to track, whereas hand movements in front of the body can be unreliable.
4.5.4.2 Field of view
Make sure the sensor tilt and location, and your gesture design, avoid situations where the sensor can't see parts of a gesture, such as users extending a hand above their head.
4.5.4.3 Tracking reliability
Skeleton tracking is most stable when the user faces the sensor.
4.5.4.4 Tracking speed
For very fast gestures, consider skeleton tracking speed and frames-per-second
limitations. The fastest that Kinect for Windows can track is at 30 frames per second.
4.5.5 Remember your audience
Regardless of how you define your gestures, keep your target audience in mind so
that the gestures work for the height ranges and physical and cognitive abilities of your
users. Think about the whole distance range that your users can be in, angles that people
might pose at, and various height ranges that you want to support. Conduct frequent
usability tests and be sure to test across the full range of intended user types.
4.5.5.1 Physical differences
For example, you should account for users of various heights and limb lengths.
Young people also make, for example, very different movements than adults when
performing the same action, due to differences in their dexterity and control.
CHAPTER 5
Asset Creation
The environment is considered the nuts and bolts of an application when it comes to gaining the user's confidence and encouraging the user to explore more. An environment's most essential entities are the assets one builds to replicate the scenario and bring it closer to reality through better graphics.
With the aim of creating effective 3D environments and giving the application a better graphical look, the following tools were used to build the assets of CAFAT:
1. For 3D Modeling: 3D Studio Max is used
2. Character Animation is done via using Autodesk MotionBuilder
3. Interface Design: Adobe Photoshop & Adobe Illustrator
Major assets which make the scenarios real and tangible are the bus stop, house, fire extinguisher, Vinscent (the main character), forest, buildings, cars, animals and more.
All of the assets mentioned above are built using the four main tools listed above.
Below is a brief introduction to the tools used for asset creation:
5.1 3D Modeling
The process of creating a 3D representation of any surface or object by
manipulating polygons, edges, and vertices in simulated 3D space. 3D modeling can be
achieved manually with specialized 3D production software that lets an artist create
and deform polygonal surfaces, or by scanning real-world objects into a set of data
points that can be used to represent the object digitally.
3D Modeling is used in a wide range of fields, including engineering,
entertainment design, film, visual effects, game development, and commercial advertising.
To design the major assets for CAFAT, like roads, animals, the house and bungalow, and human characters, 3D Studio Max is used.
5.1.1 Introduction to 3D Studio Max
3ds Max software provides a comprehensive 3D modeling, animation, rendering,
and compositing solution for games, film, and motion graphics artists. 3ds Max 2014 has
new tools for crowd generation, particle animation, and perspective matching.
Autodesk 3ds Max, formerly 3D Studio Max, is a 3D computer graphics program
for making 3D animations, models, and images. It was developed and produced by
Autodesk Media and Entertainment. It has modeling capabilities, a flexible plugin
architecture and can be used on the Microsoft Windows platform. It is frequently used by
video game developers, many TV commercial studios and architectural visualization
studios. It is also used for movie effects and movie pre-visualization.
In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.
5.1.1.1 Features
5.1.1.1.1 MAXScript
MAXScript is a built-in scripting language that can be used to automate
repetitive tasks, combine existing functionality in new ways, develop new tools and
user interfaces, and much more. Plugin modules can be created entirely within
MAXScript.
5.1.1.1.2 Character Studio
Character Studio was a plugin which, since version 4 of Max, is integrated in 3D Studio Max, helping users to animate virtual characters. The system works using a character rig or "Biped" skeleton which has stock settings that can be modified and customized to fit character meshes and animation needs. This tool also includes robust editing tools for IK/FK switching, pose manipulation, layers, and keyframing workflows.
5.1.1.1.3 Scene Explorer
Scene Explorer, a tool that provides a hierarchical view of scene data and
analysis, facilitates working with more complex scenes. Scene Explorer has the
ability to sort, filter, and search a scene by any object type or property (including
metadata). Added in 3ds Max 2008, it was the first component to facilitate .NET
managed code in 3ds Max outside of MAXScript.
5.1.1.1.4 Texture assignment/editing
3ds Max offers operations for creative texture and planar mapping, including
tiling, mirroring, decals, angle, rotate, blur, UV stretching, and relaxation; Remove
Distortion; Preserve UV; and UV template image export. The texture workflow
includes the ability to combine an unlimited number of textures, a material/map
browser with support for drag-and-drop assignment, and hierarchies with
thumbnails.
5.1.1.1.5 Skeletons and IK – Inverse Kinematics
Characters can be rigged with custom skeletons using 3ds Max bones, IK
solvers, and rigging tools powered by Motion Capture Data.
All animation tools — including expressions, scripts, list controllers, and
wiring — can be used along with a set of utilities specific to bones to build rigs of
any structure and with custom controls, so animators see only the UI necessary to get their characters animated.
5.1.1.2 Industrial Usage
Many recent films have made use of 3ds Max, or previous versions of the program
under previous names, in CGI animation, such as Avatar and 2012, which contain computer
generated graphics from 3ds Max alongside live-action acting.
3ds Max has also been used in the development of 3D computer graphics for a
number of video games.
Architectural and engineering design firms use 3ds Max for developing concept art
and previsualization.
5.1.1.3 Educational Usage
Educational programs at secondary and tertiary level use 3ds Max in their
courses on 3D computer graphics and computer animation. Students in the FIRST
competition for 3d animation are known to use 3ds Max.
5.2 Character Animation
Character animation is more or less a generic term that covers all the objects which hold animation.
Listed below are some of the main animations of CAFAT, grouped by scenario and situation:
1. Fire in the house
Glass break
Fire extinguisher
Jump from balcony
Fire blew in kitchen
Cars on road
2. Snake Bite
Snake biting human
Human fall-to-sit on ground
Drop Axe
3. Heart Attack
Patient faints and falls
Applying CPR
Patient writhes
Apply
Bus passing by bus stop
Cars on road
People walking on footpath
All the animations mentioned above are built using Autodesk MotionBuilder. A brief introduction to the tool is presented below:
5.2.1 Introduction to Autodesk MotionBuilder
Autodesk MotionBuilder is 3D character animation software for virtual production that enables you to more efficiently manipulate and refine data with greater reliability. Capture, edit, and play back complex character animation in a highly responsive, interactive environment, and work with a display optimized for both animators and directors.
MotionBuilder is professional 3D character animation software. It is used for virtual
production, motion capture, and traditional keyframe animation. MotionBuilder is produced
by Autodesk. It was originally named Filmbox when it was first created by Canadian
company Kaydara – later acquired by Autodesk and renamed to MotionBuilder.
It is primarily used in film, game, television production, as well as other multimedia
projects. MotionBuilder is widely used, for example in mainstream products like Assassin's
Creed, Killzone 2, and Avatar.
5.2.1.1 Features
1. Real-time display and animation tools
2. Facial and skeletal animation
3. A software development kit which exposes functionality through Python and
C++
4. Native FBX support which allows interoperability between it and, for example,
Maya and 3d Studio Max
5. Ragdoll Physics
6. Inverse Kinematics
7. 3D non-linear editing system (the Story tool)
8. Professional video broadcast output
9. Direct connection to other digital content creation tools
10. The Autodesk FBX file format (.fbx extension) for 3D-application data exchange
has grown out of this package.
5.3 Interface Design
Interfaces and some screens of CAFAT are designed using Adobe Photoshop and
Adobe Illustrator. These tools are widely used and possess remarkable acceptance around
the world. Following are the screens which are designed using these tools:
1. Main Menu Screen
2. Pause Screen
3. Loading Screens
4. How to Play
5. Tips and Hints
6. Cell Phone
Here is a brief introduction to Adobe Photoshop and Adobe Illustrator:
5.3.1 Introduction to Adobe Photoshop
Adobe Photoshop CS6 is the 13th major release of Adobe Photoshop. The CS
rebranding also resulted in Adobe offering numerous software packages containing
multiple Adobe programs for a reduced price. Adobe Photoshop is released in two
editions: Adobe Photoshop, and Adobe Photoshop Extended, with the Extended edition having extra 3D image creation, motion graphics editing, and advanced image analysis features.
5.3.1.1 File Formats
Photoshop files have the default file extension .PSD, which stands for "Photoshop Document." A PSD file stores an image with support for most imaging options available in Photoshop. These include layers with masks, transparency, text, alpha channels and spot colors, clipping paths, and duotone settings. This is in contrast to many other file formats (e.g. .JPG or .GIF) that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels, and a file size limit of 2 gigabytes.
5.3.1.2 Photoshop Plug-ins
Photoshop functionality can be extended by add-on programs called Photoshop
plugins (or plug-ins). Adobe creates some plugins, such as Adobe Camera Raw, but third-
party companies develop most plugins, according to Adobe's specifications. Some are free
and some are commercial software. Most plugins work with only Photoshop or Photoshop-
compatible hosts, but a few can also be run as standalone applications.
1. Color correction plugins (Alien Skin Software, Nik Software, OnOne Software, Topaz
Labs Software, The Plugin Site, etc.)
2. Special effects plugins (Alien Skin Software, Auto FX Software, AV Bros., Flaming Pear Software, etc.)
3. 3D effects plugins (Andromeda Software, Strata, etc.)
5.3.1.3 Basic Tools
Upon loading Photoshop, a sidebar with a variety of tools with multiple image-
editing functions appears to the left of the screen. These tools typically fall under the
categories of drawing; painting; measuring and navigation; selection; typing; and
retouching. Some tools contain a small triangle in the bottom right of the toolbox icon.
These can be expanded to reveal similar tools. While newer versions of Photoshop are updated to include new tools and features, several recurring tools exist in most versions.
5.3.2 Introduction to Adobe Illustrator
Adobe Illustrator is a vector graphics editor developed and marketed by Adobe
Systems. The latest version, Illustrator CC, is the seventeenth generation in the
product line.
Most of the tools of AI are similar to those of Adobe Photoshop; however, it supports vector design, refinishing 3D models, and building very high resolution images.
5.3.2.1 File Format
The file format it generates is .AI, which can easily be manipulated in Photoshop by adding the AI plug-in to it.
CHAPTER 6
Introduction to Unity 3D
Unity 3D is a Game development tool that made our project possible. It let us build
fully functional, professional 3D Game prototypes with realistic environments, sound,
dynamic effects and much more. It would have been quite a challenge completing this
project without Unity.
Game Engines such as Unity are the power-tools behind the games we know and
love. Unity is one of the most widely-used and best loved packages for game development
and is used by everyone from hobbyists to large studios to create games and interactive
experiences for the web, desktops, mobiles, and consoles. Unity helped us from creating 3D
worlds to scripting and creating game mechanics.
6.1 Unity Basics
Before getting started with any 3D package, it was crucial to understand the
environment we would be working in. As Unity is primarily a 3D-based development tool,
many concepts required a certain level of understanding of 3D development and game
engines. Therefore it was crucial to understand some important 3D concepts before moving
on to discuss the concepts and interface of Unity itself.
1. Coordinates and vectors
2. 3D shapes
3. Materials and textures
4. Rigidbody dynamics
5. Collision detection
6. Game Objects and Components
7. Assets and Scenes
8. Prefab
9. Unity editor interface
For details on these elements, see the study resources in the Appendix, which can be consulted for further research on each of the 3D concepts mentioned above.
6.2 Learning the Interface
The Unity interface, like many other working environments, has a customizable layout. It consists of several dockable spaces, so you can pick which parts of the interface appear where. Let's take a look at a typical Unity layout:
This layout can be achieved by going to Window | Layouts | 2 by 3 in Unity. As the
previous image demonstrates, there are five different panels or views you'll be dealing with,
which are as follows:
Scene [1]—where the game is constructed.
Game [2]—the preview window, active only in play mode.
Hierarchy [3]—a list of GameObjects in the scene.
Project [4]—a list of your project's assets; acts as a library.
Inspector [5]—settings for the currently selected asset/object/setting.
Unity 1: Interface
The complete list of tools that define the interface is:
1. The Scene view and Hierarchy
2. Control tools
3. Flythrough Scene navigation
4. Control bar
5. Search box
6. Create button
7. The Inspector
8. The Game view
9. The Project window
6.3 Creating Scenes
Scenes contain the objects of the game, e.g. the fire extinguisher and fire alarm in "Fire in the House" are objects in CAFAT. Scenes can be used to create a main menu, individual levels, and anything else. There is a separate scene file for every scenario, and the assets are shared among all of them. In each scene we placed our environments and obstacles and set up coordinates, cameras, lighting, and decorations, essentially designing and building our game in pieces.
Following is a brief introduction of what makes up a scene.
6.3.1 Creating a Prefab
A Prefab is a type of asset -- a reusable GameObject stored in Project View. Prefabs
can be inserted into any number of scenes, multiple times per scene. When you add a Prefab
to a scene, you create an instance of it. All Prefab instances are linked to the original Prefab
and are essentially clones of it. No matter how many instances exist in your project, when
you make any changes to the Prefab you will see the change applied to all instances.
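As a sketch of how this works in practice (the class and field names here are illustrative, not taken from CAFAT's actual source), a Prefab stored in the Project View can be instantiated from a script:

```csharp
using UnityEngine;

public class ExtinguisherSpawner : MonoBehaviour
{
    // Assigned in the Inspector by dragging the Prefab from the Project View.
    public GameObject extinguisherPrefab;

    void Start()
    {
        // Each call creates a new instance linked to the original Prefab;
        // editing the Prefab later updates every instance in every scene.
        Instantiate(extinguisherPrefab, new Vector3(0f, 1f, 0f), Quaternion.identity);
    }
}
```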
6.3.2 Adding Components & Scripts
Scripts are a type of Component. To add a Component, just highlight your
GameObject and select a Component from the Component menu. You will then see the
Component appear in the Inspector of the GameObject. Scripts are also contained in the
Component menu by default.
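Components can also be added from a script at runtime; a minimal hypothetical sketch, not taken from CAFAT's code:

```csharp
using UnityEngine;

public class SetupObject : MonoBehaviour
{
    void Start()
    {
        // Adds a collision shape to this GameObject at runtime; the new
        // Component then appears in the Inspector just like one added manually.
        BoxCollider box = gameObject.AddComponent<BoxCollider>();
        box.isTrigger = false;
    }
}
```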
6.3.3 Placing GameObjects
Once the GameObject is in the scene, you can use the Transform Tools to position it wherever you like. Additionally, you can use the Transform values in the Inspector to fine-tune placement and rotation.
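The same fine-tuning can be done from code; a brief sketch with illustrative values:

```csharp
using UnityEngine;

public class PlaceObject : MonoBehaviour
{
    void Start()
    {
        // Equivalent to typing values into the Inspector's Transform fields.
        transform.position = new Vector3(2f, 0f, 5f);          // world position
        transform.rotation = Quaternion.Euler(0f, 90f, 0f);    // 90° turn on Y
        transform.localScale = Vector3.one * 1.5f;             // uniform scale
    }
}
```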
6.3.4 Working with Cameras
Cameras are the eyes of your game. Everything the player will see while playing is
through one or more cameras. You can position, rotate, and parent cameras just like any
other GameObject. A camera is just a GameObject with a Camera Component attached to it.
Therefore it can do anything a regular GameObject can do and then some camera-specific
functions too.
6.3.5 Lights
There are three different types of lights in Unity, and all of them behave a little
differently from each other. The important thing is that they add atmosphere and ambience
to the game. Different lighting can completely change the mood of the game as it did in
CAFAT.
1. The prototyping environment in the image below shows a floor made from a cube primitive, a main camera through which to view the 3D world, and a Point Light set up to highlight the area where our gameplay will be introduced.
2. You start with a dull, grey background and end up with something far more realistic than one can imagine.
6.4 Asset Types, Creation and Import
Assets are simply the models we create to give the game a realistic touch; together they form the 3D atmosphere or environment. All objects that take part in gameplay come under this title. You can use any supported 3D modeling package to create a rough version of an asset; for CAFAT we used 3D Studio Max. The following applications are supported by Unity:
1. Maya
2. Cinema 4D
3. 3D Studio Max
4. Cheetah3D
5. Modo
6. Lightwave
7. Blender
Unity 2: Scene View of Light
In CAFAT, the forest environment for the "Snake Bite" scenario, the two-story building block in "Fire in the House," and the downtown environment for "Heart Attack" were all created first in 3D Studio Max and then imported into Unity.
There are a handful of basic asset types that will go into the game. The types are:
1. Mesh Files & Animations
2. Texture Files
3. Sound Files
All we have to do is create the asset in 3D Studio Max and save it somewhere in the Assets folder. When you return to Unity or launch it, the added file(s) will be detected and imported: Unity automatically detects files as they are added to the project's Assets folder, and any asset placed there appears in the Project View.
Unity 3: Assets
6.5 Creating Gameplay
Unity empowers game designers to make games. What's really special about Unity
is that you don't need years of experience with code or a degree in art to make fun games.
There is a handful of basic workflow concepts needed to learn Unity. This section covers the core concepts needed for creating unique, amazing, and fun gameplay. The majority of these concepts required us to write scripts in C# or JavaScript.
Flow of road modeling: create a plane → select vertices → divide the plane into three meshes → select the sides and extrude them outward → create the texture → apply the texture → export in FBX format → import into Unity.
CHAPTER 7
Project Development
7.1 Creating and Preparing Models
Almost all character models are created in third-party 3D modeling applications such as 3D Studio Max, Blender, and SketchUp Pro. There are no native character-building tools built into Unity. A variety of ready-made characters (typically for purchase) are available at several sites, including the Unity Asset Store.
Flow diagrams of creating different models:
Flow Diagram 1: Road Designing
Project Development 1: 3D Studio Max
Project Development 2: SketchUp Pro
Flow Diagram 2: Scripting
7.2 Creating Animations
We have created different animations in each scenario. Following are the animations
that we have created in Motion Builder:
1. Heart Attack:
Faint Animation: when the scenario starts, the patient falls to the ground.
Writhing Animation: while CPR is applied, the patient writhes.
2. Fire in the House:
Glass Breaking Animation: when the player strikes the window five times, the glass breaks.
Fire Extinguisher Spray Animation: when the player turns on the fire extinguisher.
Fire Particles Animation: the fire throughout the house.
Jump Animation: when the player reaches the balcony, he jumps.
3. Snake Bite:
Snake Movement Animation: when the scenario starts, the snake moves and bites a character.
Victim Sitting Animation: when the snake bites the victim, he sits down.
Scripting flow: create JS/C# scripts → open the MonoDevelop editor → write the scripts → compile them → add them to the respective objects.
Flow Diagram 3: Creating Animation on a Humanoid Character. The steps are:
1. Import the rigged characters
2. Apply the character definition
3. Map the bones (characterization)
4. Create the FK and IK rig
5. Move bones and set keys
6. Play the animation and test it
7. Plot the animation to the character
8. Export in FBX with baked animation
9. Import the exported FBX from MotionBuilder
10. Change the animation type to Humanoid
11. Add the character asset to the scene and set positions
12. Add the Camera and Character components
13. Add the Locomotion scripts
14. Build the Animation Controller (state diagram of the animation)
15. Add colliders and game-logic scripts
7.3 Creating Environment
The environment is one of the most essential elements in bringing a game to perfection; it increases acceptability and user-friendliness. For users, the environment is often the judging element of a game: if it looks right, they will continue to explore it; if it does not, an ordinary user is unlikely to appreciate it. Creating close-to-reality environments is never an easy task, but Unity provides full features and options for building them. Developing such environments requires a great deal of hard work and concentration.
7.3.1 Terrain
A new Terrain can be created from GameObject->Create Other->Terrain. This
will add a Terrain to your Project and Hierarchy Views.
Your new Terrain will look like this in the Scene View:
If you would like a differently sized Terrain, choose the Options button from the
Terrain Inspector. There are a number of settings related to Terrain size which you can
change from here.
Project Development 3: Creating Terrain
Project Development 4: Terrain Scene View
In CAFAT we have built our terrains as follows:
1. HEART ATTACK:
In the heart attack scenario we built a city terrain consisting of different objects: a highway road with trees, buildings, and cars along the roadside, and people walking on the footpath. A sky view is also part of this terrain, with different types of lights and shades. The place of action, where the patient suffers the heart attack, has a bus stop and a medical store nearby, within walking distance of the player.
2. FIRE IN THE HOUSE:
Fire in the House is another CAFAT scenario. Its terrain is a city side containing a bungalow with a ground floor and a first floor, along with other objects such as roads, cars, and buildings. The bungalow has a boundary wall and a parking area. The ground floor contains the kitchen, living room, and drawing room, whereas the first floor contains a bedroom and bathroom. An internal staircase connects the first floor to the ground floor. The bedroom has a large window that can actually be broken.
Project Development 4: Terrain Options
Asset import flow: download the 3D models → import them into 3D Studio Max → download the textures → apply the textures → position the gizmo → export in FBX format.
3. SNAKE BITE:
The snake bite terrain contains relatively more objects. It is a forest location with a medieval house. Beside the house there are cattle and farm animals, and trees and grass surround it. Inside the house there is a well, a shelf, and a table; the anti-venom is placed on the shelf while the rope is placed on the table. There is a pair of axes, one held by the user and the other by a friend who is helping the user cut wood.
7.3.2 Importing models
To import a 3D model into Unity, drag the file into the Project window; its import settings then appear in the Inspector's Model tab. Unity supports importing models from most popular 3D applications.
The block diagram above shows the importing scheme for 3D Studio Max; FBX is the supported format in which the refined model is imported into Unity.
Following are the models that we downloaded from the Unity Asset Store or obtained for 3D Studio Max.
Project Development 5: Flow Diagram of Asset Importing
1. Heart Attack
Road
Medical Store
Building
Bus Stop etc.
2. Fire in the House
Alarm
Extinguisher
House
Cars etc.
3. Snake Bite
Horses
House
Rope
Snake
Anti-Venom etc.
7.3.3 Placement of models
Use the Transform Tools in the Toolbar to Translate, Rotate, and Scale individual
Game Objects. Each has a corresponding Gizmo that appears around the selected Game
Object in the Scene View. You can use the mouse and manipulate any Gizmo axis to alter
the Transform Component of the Game Object, or you can type values directly into the
number fields of the Transform Component in the Inspector. Each of the three transform
modes can be selected with a hotkey - W for Translate, E for Rotate and R for Scale.
To place models in Unity 3D, we can drag objects from the Project window into the Hierarchy or Scene View. To put an object in the right place, we use the Transform's position, rotation, and scale options.
7.3.4 Adding Components
7.3.4.1 Character Controller
The Character Controller is mainly used for third-person or first-person player
control that does not make use of Rigidbody physics.
Project Development 6: Model Placement Options
Project Development 7: Transform
You use a Character Controller if you want to make a humanoid character. This could be the main character in a third-person platformer, an FPS, or any enemy character.
These controllers do not follow the rules of physics, since that would not feel right. Instead, a Character Controller performs collision detection to make sure your characters can slide along walls, walk up and down stairs, and so on.
Character Controllers are not affected by forces but they can push Rigidbodies by
applying forces to them from a script. Usually, all humanoid characters are implemented
using Character Controllers.
7.3.4.2 Physics
Unity has NVIDIA PhysX physics engine built-in. This allows for unique emergent
behavior and has many useful features. To put an object under physics control, simply add a
Rigidbody to it. When you do this, the object will be affected by gravity, and can collide
with other objects in the world.
7.3.4.2.1 Rigid Bodies
Rigidbodies enable GameObjects to act under the control of physics. The
Rigidbody can receive forces and torque to make your objects move in a realistic way.
Project Development 8: Character Controlling Options
Rigid bodies allow your Game Objects to act under control of the physics engine.
This opens the gateway to realistic collisions, varied types of joints, and other very cool
behaviors.
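As a brief illustration (the values are hypothetical), putting a GameObject under physics control and giving it an initial push looks like this:

```csharp
using UnityEngine;

public class PushObject : MonoBehaviour
{
    void Start()
    {
        // A Rigidbody makes the object respond to gravity and collisions.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.mass = 2f;
        // ForceMode.Impulse applies an instant kick rather than a continuous force.
        body.AddForce(Vector3.forward * 10f, ForceMode.Impulse);
    }
}
```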
7.3.4.2.2 Colliders
Detecting collisions within your game is a crucial element of both realism and functionality; without them, a player walking around the CAFAT scenarios would be able to walk through walls, which is not very realistic. To fix that, we used Colliders.
Colliders are Components built into Unity that provide collision detection using their various 'Bounding Boxes', the green lines shown surrounding the tree in the image below. The Mesh Collider, however, has no green lines surrounding it, since it uses the tree's mesh itself, outlined in blue.
When using Colliders, collision is handled for us: the Collider calculates which part of the Bounding Box was intercepted first and controls its reaction to the object(s) collided with. This data is stored and made available to us via several functions, allowing us to trigger more specific behavior when objects enter, occupy, and leave a bounding box, should we need to.
Project Development 9: Types of Colliders
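Those functions are Unity's collision callbacks. A minimal sketch of how a trigger zone or a solid collision might be handled (the tag and log messages are illustrative, not CAFAT's actual code):

```csharp
using UnityEngine;

public class DoorTrigger : MonoBehaviour
{
    // Called once when another Collider first enters this trigger volume
    // (this Collider's "Is Trigger" box must be checked in the Inspector).
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            Debug.Log("Player entered the doorway");
        }
    }

    // Called for solid (non-trigger) collisions, e.g. walking into a wall.
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Hit " + collision.gameObject.name);
    }
}
```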
7.3.4.3 Audio
The Audio Listener works in conjunction with Audio Sources, allowing you to create the aural experience for your games. When the Audio Listener is attached to a GameObject in your scene, any Sources close enough to the Listener will be picked up and output to the computer's speakers.
In CAFAT, audio sources are added for the alarm, the character's walking, the fire extinguisher spray, jumping, hitting and breaking the glass window, the main menu, and so on.
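A sketch of how one such sound might be wired up in a script (the clip is assigned in the Inspector; the names are illustrative):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class AlarmSound : MonoBehaviour
{
    public AudioClip alarmClip;   // assigned in the Inspector

    // Called by scenario logic when the player rings the alarm.
    public void RingAlarm()
    {
        // A 3D AudioSource is attenuated by its distance to the Audio Listener.
        AudioSource source = GetComponent<AudioSource>();
        source.clip = alarmClip;
        source.Play();
    }
}
```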
7.4 Scripting
Scripting with Unity brings fast iteration and execution plus the strength and flexibility of a world-leading programming environment. Scripting is uncluttered, straightforward, and incredibly fast. In Unity, you write simple behavior scripts in UnityScript, C#, or Boo. All three languages are easy to use and run on the open-source .NET platform, Mono, with rapid compilation times; for CAFAT, we chose C# as the primary programming language for our game development.
Project Development 10: The Audio Listener, attached to the Main Camera
7.4.1 Languages Used
7.4.1.1 Unity Script
Unity JavaScript is compiled (and fast, which is excellent) but not as dynamic as JavaScript in browsers (which is interpreted).
7.4.1.2 C#
1. C# is a multi-paradigm programming language encompassing strong typing, imperative,
declarative, functional, procedural, generic, object-oriented (class-based), and
component-oriented programming disciplines
2. C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. The most recent version is C# 5.0, released on August 15, 2012. We used the latest version to ensure maximum cross-platform compatibility.
In CAFAT a lot of scripting was done, from controlling the movement of cars to controlling the movement of the character. Below is a list of some of the most important scripts and their descriptions:
1. Character Movement: It controls the movement of character like walk, jump, turning
left and right.
2. Kinect Manager: Used to get real-time skeleton and depth data from the Kinect.
3. Kinect Wrapper: A wrapper that helps invoke the C++ Kinect API functions from C# using P/Invoke.
4. Speech Manager: Used to get real-time audio data from the Kinect.
5. Speech Wrapper: A wrapper that helps invoke the C++ SAPI and Kinect audio functions from C# using P/Invoke.
6. Interaction Manager: Used to get real-time interaction (grip, release) data from the Kinect.
7. Interaction Wrapper: A wrapper that helps invoke the C++ Kinect Interaction API functions from C# using P/Invoke.
8. Heart Attack Logic: Contains the logic for the heart attack scenario, including scenario success and failure.
9. Fire Logic: Contains the logic for the fire-in-the-house scenario, including scenario success and failure.
10. Snake Logic: Contains the logic for the snake bite scenario, including scenario success and failure.
11. Main Menu: Contains the logic for navigating scenarios with the help of voice commands.
There are also other scripts defined in CAFAT for other functions for example
spread of fire, camera controlling etc.
7.4.2 Integrating Kinect SDK with Unity
The Kinect wrapper script is used for integrating the Kinect SDK with Unity 3D. It uses the C++ Natural User Interface (NUI) API provided by Microsoft and maps those functions to Mono C# equivalents that can then be invoked from within Unity.
7.4.2.1 Challenge - Managed/Unmanaged Code Interoperability
It was a challenge to map native C++ functions into C#. We used platform invoke (P/Invoke) to overcome this: it enables calling any function in any unmanaged language as long as its signature is redeclared in managed source code.
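A hedged sketch of the pattern (the DLL name below is illustrative, not the exact Kinect SDK module; the signature shape follows the NUI API):

```csharp
using System;
using System.Runtime.InteropServices;

public static class KinectNative
{
    // The unmanaged function's signature is redeclared in managed code;
    // the runtime marshals arguments across the managed/unmanaged boundary.
    [DllImport("KinectNativeLib.dll", CallingConvention = CallingConvention.StdCall)]
    public static extern int NuiInitialize(uint flags);

    [DllImport("KinectNativeLib.dll")]
    public static extern void NuiShutdown();
}
```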
7.4.2.2 Interacting character with Kinect
The Avatar Controller script does the work of calculating the joint positions of the human player and converting them to appropriate coordinates, which are then applied to the humanoid character in real time.
7.4.2.3 Defining Custom Gestures
Some custom gestures are defined to control left/right movement, walking, taking out the phone, turning on the fire extinguisher, and so on. Descriptions of some of the gestures are listed below:
1. Walking: detected when there is a difference of more than 20 cm between the knee joint and the ankle joint.
2. Right/Left Turning: detected when the difference between the right/left shoulder and the right/left wrist is more than 40 cm.
3. Phone Dialing: detected when the player places a hand on the upper pocket near the shoulder bone.
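The threshold checks above reduce to simple joint-distance comparisons. A sketch of the idea (the joint positions would come from the Kinect Manager; class and method names are illustrative):

```csharp
using UnityEngine;

public static class GestureDetector
{
    // Walking: knee lifted more than 20 cm relative to the ankle.
    public static bool IsWalking(Vector3 knee, Vector3 ankle)
    {
        return Mathf.Abs(knee.y - ankle.y) > 0.20f;
    }

    // Turning: wrist more than 40 cm away from the shoulder on that side.
    public static bool IsTurning(Vector3 shoulder, Vector3 wrist)
    {
        return Vector3.Distance(shoulder, wrist) > 0.40f;
    }
}
```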
7.4.2.4 Grabbing Objects
This is a feature of the Interaction Manager that detects grabbing and releasing by both hands. For example, grabbing or releasing the rope or fire extinguisher in a scenario happens when the player collides with the object and grips or releases it.
7.4.2.5 Voice Recognition
Voice recognition is done using the microphone array in the Kinect device and the Microsoft Speech SDK. A grammar file is defined that contains the terms to be recognized in each specific scenario.
1. Main Menu: navigation in the main menu is voice-only; the player has to give a voice command in order to proceed.
2. Pause Menu: during a scenario, if the player says PAUSE, the game halts in its current state and displays a pause menu.
3. Phone Dial-up: to dial numbers in a scenario, the player has to speak the numbers and then say CALL.
7.4.2.6 Camera View Controlling
The camera in CAFAT is controlled by the movement of the head: if the player moves his head up or down, the camera moves to the corresponding position. We defined a script in the Avatar Controller that detects the joint position of the head and processes it as input for the Unity camera.
7.4.2.7 Game Logic
In all the scenarios there is timer logic that defines whether the level fails or succeeds: if the player performs the given challenges within the allocated time, the scenario succeeds; if not, it fails.
Each scenario also has logic that defines its results, described below:
1. Heart Attack: if the player succeeds in performing CPR, calling an ambulance, or using aspirin within the allocated time, the scenario is completed.
2. Fire in the House: the player has to ring the alarm and escape, extinguish the fire with the fire extinguisher, or break the glass and call the fire emergency service within the predefined time to complete the scenario successfully.
3. Snake Bite: the player has to use the anti-venom or tie the rope on the victim's injury within the given time to complete the scenario successfully.
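The countdown described above can be sketched as a small script (the time limit and method names are illustrative, not CAFAT's exact values):

```csharp
using UnityEngine;

public class ScenarioTimer : MonoBehaviour
{
    public float allottedTime = 120f;   // seconds allowed for the scenario
    bool finished;

    void Update()
    {
        if (finished) return;
        allottedTime -= Time.deltaTime;   // count down in real time
        if (allottedTime <= 0f)
        {
            finished = true;
            Debug.Log("Scenario failed: time over");
        }
    }

    // Called by scenario logic (e.g. CPR done, fire extinguished) on success.
    public void CompleteScenario()
    {
        if (!finished)
        {
            finished = true;
            Debug.Log("Scenario success");
        }
    }
}
```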
CHAPTER 8
Future Enhancements & Recommendations
We learned a lot while developing CAFAT, but with every step forward came a new challenge and a great deal more to practice and apply. Unity 3D has no limit to learning, and in order to create perfectly natural-looking assets with realistic graphics, we still have a long way to go. We will try our best to further extend the number of scenarios, improve character interaction with GameObjects, improve audio recognition while using KINECT, create user profiles, and more.
To keep developing CAFAT, we hope that upcoming students will extend its features and functionality. We also expect them to make CAFAT compatible with the upcoming new version of KINECT, which is more powerful than the current one. We would be glad to cooperate with anyone who helps with the future development of this 3D trainer.
We also hope that CAFAT will bring a winning trophy to NED in next year's Microsoft Imagine Cup competition.
CHAPTER 9
Conclusion
The entire experience, from taking up the FYP to finishing CAFAT, was a once-in-a-lifetime experience and among the best moments of our four-year undergraduate life. The teamwork, task distribution, sudden meetings, growing number of challenges, and active contribution to a single project taught us a lot about working under pressure and living up to deadlines.
CAFAT enhanced our technical skills tremendously. Learning game development was an exciting experience; working with Unity 3D, 3D Studio Max, and MotionBuilder was no less than a roller-coaster ride that enriched our imaginative powers and introduced us to a different world we call virtual reality.
REFERENCES
Websites:
http://Unitybook.net
http://Docs.unity3d.com/documentation
http://Docs.unity3d.com/tutorial
http://digitaltutors.com/software/3ds-max-tutorials
http://digitaltutors.com/software/motionbuilder-tutorials
http://microsoft.com/en.us/kinnectforwindows
http://photoshop.com/products/photoshop/what
http://helpx.adobe.com/illustrator/topics/illustrator-tutorials.html
Books and Online Documentation:
Unity 3.x Game Development Essentials by Will Goldstone
http://Msdn.micrsoft.com/en-us/library/hh855352.aspx
http://Social.msdn.microsoft.com/forums/en-us
GLOSSARY
CAFAT – Computer Aided First Aid Trainer
KINECT – A device that uses sensors to track the human skeleton.
CPR – Cardio Pulmonary Resuscitation
SDK – Software Development Kit
GAMEOBJECT – Base Class for all entities in unity scenes
TRANSFORM – Position, rotation and scale of an object
PREFAB – A type of asset, a reusable GameObject stored in project view
ASSET – A 3D Object
TERRAIN – A 3D plane which provides foundation for environment
MESH – A Class that allows creating or modifying vertices and multiple
triangle arrays
COLLISION – Describes a contact point where the physical collision shall
occur
RENDERING – It makes an object appear on screen
AUDIO SOURCE – Is attached to GameObject for playing back sounds in
3D environment
ANIMATION CLIP - Animation data that can be used for animated
characters or simple animations.
BODY MASK - A specification for which body parts to include or exclude
for a skeleton.
ANIMATION CURVES - Curves can be attached to animation clips and
controlled by various parameters from the game.
AVATAR - An interface for retargeting one skeleton to another.
RIGGING - The process of building a skeleton hierarchy of bone joints for
your mesh.
SKINNING - The process of binding bone joints to the character's mesh or
'skin'.
INVERSE KINEMATICS (IK) - The ability to control the character's
body parts based on various objects in the world.