CSCE 552 Fall 2012
AI
By Jijun Tang
Homework 3
List of AI techniques in games you have played;
Select one game and discuss how AI enhances its game play or how its AI can be improved
Due Nov 28th
Command Hierarchy
Strategy for dealing with decisions at different levels
From the general down to the foot soldier
Modeled after military hierarchies
General directs high-level strategy
Foot soldier concentrates on combat
Dead Reckoning
Method for predicting object’s future position based on current position, velocity and acceleration
Works well since movement is generally close to a straight line over short time periods
Can also give guidance to how far object could have moved
Example: shooting game to estimate the leading distance
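The prediction is position + velocity·dt + ½·acceleration·dt², and "leading" a target means aiming at the predicted position one projectile flight time ahead. A minimal sketch (the names Vec2, deadReckon, and leadTarget are illustrative, not from the slides; leadTarget uses a single-iteration flight-time approximation):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Dead reckoning: predict where an object will be after dt seconds,
// assuming its current velocity and acceleration stay roughly constant.
// p(t + dt) = p + v*dt + 0.5*a*dt^2
Vec2 deadReckon(const Vec2& pos, const Vec2& vel, const Vec2& acc, double dt) {
    return { pos.x + vel.x * dt + 0.5 * acc.x * dt * dt,
             pos.y + vel.y * dt + 0.5 * acc.y * dt * dt };
}

// Leading a moving target: estimate the projectile's flight time to the
// target's current position, then aim at the dead-reckoned position
// that far in the future (one-iteration approximation).
Vec2 leadTarget(const Vec2& target, const Vec2& vel,
                const Vec2& shooter, double projSpeed) {
    double dx = target.x - shooter.x, dy = target.y - shooter.y;
    double flightTime = std::sqrt(dx * dx + dy * dy) / projSpeed;
    return deadReckon(target, vel, {0.0, 0.0}, flightTime);
}
```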
Emergent Behavior
Behavior that wasn’t explicitly programmed
Emerges from the interaction of simpler behaviors or rules
Rules: seek food, avoid walls
Can result in unanticipated individual or group behavior
Flocking/Formation
Mapping Example
Level-of-Detail AI
Optimization technique like graphical LOD
Only perform AI computations if player will notice
For example:
Only compute detailed paths for visible agents
Off-screen agents don't think as often
Manager Task Assignment
Manager organizes cooperation between agents
Manager may be invisible in game
Avoids complicated negotiation and communication between agents
Manager identifies important tasks and assigns them to agents
For example, a coach in an AI football team
Example
Amit [to Steve]: Hello, friend!
Steve [nods to Bryan]: Welcome to CGDC.
[Amit exits left.]
Amit.turns_towards(Steve);
Amit.walks_within(3);
Amit.says_to(Steve, "Hello, friend!");
Amit.waits(1);
Steve.turns_towards(Bryan);
Steve.walks_within(5);
Steve.nods_to(Bryan);
Steve.waits(1);
Steve.says_to(Bryan, "Welcome to CGDC.");
Amit.waits(3);
Amit.face_direction(DIR_LEFT);
Amit.exits();
Example
Player escapes in combat: pop Combat off the stack and go to Search; if the player is not found, pop Search off and go back to Patrol, …
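This escalate/de-escalate pattern can be sketched as a stack-based state machine (a minimal illustration; the AIState and StateStack names and the two helper scenarios are assumptions, not from the slides):

```cpp
#include <stack>

// Patrol is the base state; Search and Combat are pushed on top as the
// situation escalates and popped again as it de-escalates.
enum class AIState { Patrol, Search, Combat };

class StateStack {
public:
    StateStack() { m_states.push(AIState::Patrol); }        // base state
    void push(AIState s) { m_states.push(s); }
    void pop() { if (m_states.size() > 1) m_states.pop(); } // never pop Patrol
    AIState current() const { return m_states.top(); }
private:
    std::stack<AIState> m_states;
};

AIState stateAfterCombatEscape() {
    StateStack ai;                 // starts in Patrol
    ai.push(AIState::Search);      // heard a noise
    ai.push(AIState::Combat);      // spotted the player
    ai.pop();                      // player escapes combat
    return ai.current();           // back in Search
}

AIState stateAfterFailedSearch() {
    StateStack ai;
    ai.push(AIState::Search);
    ai.push(AIState::Combat);
    ai.pop();                      // player escapes combat -> Search
    ai.pop();                      // search fails -> Patrol
    return ai.current();
}
```

Pushing and popping preserves the interrupted state, so the agent resumes exactly where it left off instead of restarting its behavior from scratch.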
Example
Bayesian Networks
Performs humanlike reasoning when faced with uncertainty
Potential for modeling what an AI should know about the player
Alternative to cheating
RTS example: AI can infer the existence or nonexistence of player-built units
Example
Bayesian Networks
Inferring unobserved variables
Parameter learning
Structure learning
Blackboard Architecture
Complex problem is posted on a shared communication space
Agents propose solutions
Solutions scored and selected
Continues until problem is solved
Alternatively, use concept to facilitate communication and cooperation
Decision Tree Learning
Constructs a decision tree based on observed measurements from game world
Best known game use: Black & White
Creature would learn and form “opinions”
Learned what to eat in the world based on feedback from the player and world
Filtered Randomness
Filters randomness so that it appears random to players over short term
Removes undesirable events
Like coin coming up heads 8 times in a row
Statistical randomness is largely preserved without gross peculiarities
Example: In an FPS, opponents should randomly spawn from different locations (and never spawn from the same location more than 2 times in a row).
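One common way to implement the FPS example is rejection filtering: pick uniformly, but re-roll whenever the same location would come up a third time in a row. A sketch under those assumptions (the FilteredSpawner name and the helper longestRun are illustrative):

```cpp
#include <random>

// Filtered randomness: uniform spawn-point selection, but never the same
// location more than twice in a row (the slide's two-in-a-row limit).
class FilteredSpawner {
public:
    explicit FilteredSpawner(int numLocations, unsigned seed = 0)
        : m_rng(seed), m_dist(0, numLocations - 1) {}

    int next() {
        int pick = m_dist(m_rng);
        // Reject picks that would make a run of three identical locations.
        while (m_runLength >= 2 && pick == m_last)
            pick = m_dist(m_rng);
        m_runLength = (pick == m_last) ? m_runLength + 1 : 1;
        m_last = pick;
        return pick;
    }
private:
    std::mt19937 m_rng;
    std::uniform_int_distribution<int> m_dist;
    int m_last = -1;
    int m_runLength = 0;
};

// Helper: the longest run of identical spawn locations over n draws.
int longestRun(int numLocations, int n) {
    FilteredSpawner spawner(numLocations);
    int last = -1, run = 0, best = 0;
    for (int i = 0; i < n; ++i) {
        int s = spawner.next();
        run = (s == last) ? run + 1 : 1;
        last = s;
        if (run > best) best = run;
    }
    return best;
}
```

Each individual draw is still uniform over the allowed locations, so the statistics stay close to random while the objectionable streaks disappear.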
Genetic Algorithms
Technique for search and optimization that uses evolutionary principles
Good at finding a solution in complex or poorly understood search spaces
Typically done offline before game ships
Example: Game may have many settings for the AI, but interaction between settings makes it hard to find an optimal combination
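As a toy illustration of the evolutionary loop (selection, crossover, mutation) on a single AI setting, here is a sketch that evolves a value x toward the known optimum of a simple fitness function; the function, population sizes, and all names are assumptions for demonstration, not from the slides:

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Toy fitness: peaks at x = 3, so the GA should converge near 3.
double fitness(double x) { return -(x - 3.0) * (x - 3.0); }

double evolveBest(int popSize, int generations, unsigned seed = 0) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> init(-10.0, 10.0);
    std::normal_distribution<double> mutate(0.0, 0.1);

    std::vector<double> pop(popSize);
    for (double& x : pop) x = init(rng);          // random initial population

    for (int g = 0; g < generations; ++g) {
        // Selection: sort by fitness, keep the better half as parents.
        std::sort(pop.begin(), pop.end(),
                  [](double a, double b) { return fitness(a) > fitness(b); });
        int half = popSize / 2;
        std::uniform_int_distribution<int> parent(0, half - 1);
        // Crossover (averaging two parents) + mutation refill the worse half.
        for (int i = half; i < popSize; ++i) {
            double child = 0.5 * (pop[parent(rng)] + pop[parent(rng)]);
            pop[i] = child + mutate(rng);
        }
    }
    std::sort(pop.begin(), pop.end(),
              [](double a, double b) { return fitness(a) > fitness(b); });
    return pop[0];                                // best individual found
}
```

A real game would evolve a whole vector of AI settings with a fitness measured by playtesting or simulation, which is why this is done offline.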
Flowchart
N-Gram Statistical Prediction
Technique to predict next value in a sequence
In the sequence 18181810181, it would predict 8 as being the next value
Example: In a street fighting game, the player just did Low Kick followed by Low Punch
Predict their next move and expect it
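A bigram (2-gram) version of the predictor can be sketched as follows: count which symbol most often follows the current one and predict that. Feeding it the slide's sequence and asking what follows the trailing '1' yields '8' (the predictNext name is illustrative):

```cpp
#include <map>
#include <string>

// 2-gram prediction: tally follower counts for each symbol, then predict
// the most frequent follower of the last symbol in the history.
char predictNext(const std::string& history) {
    if (history.empty()) return '?';
    std::map<char, std::map<char, int>> follows;
    for (size_t i = 0; i + 1 < history.size(); ++i)
        ++follows[history[i]][history[i + 1]];

    char context = history.back();
    char best = '?';
    int bestCount = 0;
    for (const auto& [next, count] : follows[context])
        if (count > bestCount) { best = next; bestCount = count; }
    return best;
}
```

In the fighting-game example the "symbols" would be moves (Low Kick, Low Punch, …), and longer contexts (3-grams and up) capture longer player habits at the cost of needing more observations.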
Neural Networks
Complex non-linear functions that relate one or more inputs to an output
Must be trained with numerous examples
Training is computationally expensive, making them unsuited for in-game learning
Training can take place before game ships
Once fixed, extremely cheap to compute
Example
Planning
Planning is a search to find a series of actions that change the current world state into a desired world state
Increasingly desirable as game worlds become more rich and complex
Requires:
Good planning algorithm
Good world representation
Appropriate set of actions
Player Modeling
Build a profile of the player’s behavior
Continuously refine during gameplay
Accumulate statistics and events
Player model then used to adapt the AI
Make the game easier: player is not good at handling some weapons, so avoid using them
Make the game harder: player is not good at handling some weapons, so exploit this weakness
Production (Expert) Systems
Formal rule-based system
Database of rules
Database of facts
Inference engine to decide which rules trigger – resolves conflicts between rules
Example: Soar was used to experiment with Quake 2 bots
Upwards of 800 rules for a competent opponent
Reinforcement Learning
Machine learning technique
Discovers solutions through trial and error
Must reward and punish at appropriate times
Can solve difficult or complex problems like physical control problems
Useful when AI’s effects are uncertain or delayed
Reputation System
Models player’s reputation within the game world
Agents learn new facts by watching player or from gossip from other agents
Based on what an agent knows
Might be friendly toward player
Might be hostile toward player
Affords new gameplay opportunities
“Play nice OR make sure there are no witnesses”
Smart Terrain
Put intelligence into inanimate objects
Agent asks object how to use it: how to open the door, how to set the clock, etc.
Agents can use objects for which they weren’t originally programmed
Allows for expansion packs or user-created objects, like in The Sims
Enlightened by Affordance Theory
Objects by their very design afford a very specific type of interaction
Speech Recognition
Players can speak into microphone to control some aspect of gameplay
Limited recognition means only simple commands possible
Problems with different accents, different genders, different ages (child vs adult)
Text-to-Speech
Turns ordinary text into synthesized speech
Cheaper than hiring voice actors
Quality of speech is still a problem
Not particularly natural sounding
Intonation problems
Algorithms not good at “voice acting”
The mouth needs to be animated based on the text
Large disc capacities make recording human voices not that big a problem
No need to resort to a worse-sounding solution
Weakness Modification Learning
General strategy to keep the AI from losing to the player in the same way every time
Two main steps:
1. Record a key gameplay state that precedes a failure
2. Recognize that state in the future and change something about the AI behavior
AI might not win more often or act more intelligently, but won’t lose in the same way every time
Keeps “history from repeating itself”
Artificial Intelligence: Pathfinding
PathPlannerApp Demo
Representing the Search Space
Agents need to know where they can move
Search space should represent either:
Clear routes that can be traversed
Or the entire walkable surface
Search space typically doesn’t represent: small obstacles or moving objects
Most common search space representations: grids, waypoint graphs, navigation meshes
Grids
2D grids – intuitive world representation
Works well for many games, including some 3D games such as Warcraft III
Each cell is flagged passable or impassable
Each object in the world can occupy one or more cells
Characteristics of Grids
Fast look-up
Easy access to neighboring cells
Complete representation of the level
Waypoint Graph
A waypoint graph specifies lines/routes that are “safe” for traversing
Each line (or link) connects exactly two waypoints
Characteristics of Waypoint Graphs
Waypoint node can be connected to any number of other waypoint nodes
Waypoint graph can easily represent arbitrary 3D levels
Can incorporate auxiliary information
Such as ladders and jump pads
Radius of the path
Navigation Meshes
Combination of grids and waypoint graphs
Every node of a navigation mesh represents a convex polygon (or area)
As opposed to a single position in a waypoint node
Advantage of a convex polygon:
Any two points inside can be connected without crossing an edge of the polygon
Navigation mesh can be thought of as a walkable surface
Navigation Meshes (continued)
Computational Geometry
CGAL (Computational Geometry Algorithms Library)
Find the closest phone
Find the route from point A to B
Convex hull
Example—No Rotation
Space Split
Resulting Path
Improvement
Example 2—With Rotation
Example 3—Visibility Graph
Random Trace
Simple algorithm:
Agent moves towards goal
If goal reached, then done
If obstacle:
Trace around the obstacle clockwise or counter-clockwise (pick randomly) until free path towards goal
Repeat procedure until goal reached
Random Trace (continued)
How will Random Trace do on the following maps?
Random Trace Characteristics
Not a complete algorithm
Found paths are unlikely to be optimal
Consumes very little memory
A* Pathfinding
Directed search algorithm used for finding an optimal path through the game world
Uses knowledge about the destination to direct the search
A* is regarded as the best:
Guaranteed to find a path if one exists
Will find the optimal path
Very efficient and fast
Understanding A*
To understand A*, first understand the Breadth-First, Best-First, and Dijkstra algorithms
These algorithms use nodes to represent candidate paths
Class Definition
class PlannerNode
{
public:
PlannerNode *m_pParent;
int m_cellX, m_cellY;
...
};
The m_pParent member is used to chain nodes sequentially together to represent a path
Data Structures
All of the following algorithms use two lists:
The open list
The closed list
Open list keeps track of promising nodes
When a node is examined from the open list:
It is taken off the open list and checked to see whether it has reached the goal
If it has not reached the goal:
It is used to create additional nodes
Then it is placed on the closed list
Overall Structure of the Algorithms
1. Create start point node – push onto open list
2. While open list is not empty:
   A. Pop node from open list (call it currentNode)
   B. If currentNode corresponds to goal, break from step 2
   C. Create new nodes (successor nodes) for cells around currentNode and push them onto open list
   D. Put currentNode onto closed list
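The four steps above can be sketched as a Breadth-First search over a small grid (a minimal illustration: the grid encoding, the function name, and the use of a visited map in the role of the closed list are assumptions, not from the slides):

```cpp
#include <deque>
#include <map>
#include <utility>
#include <vector>

// Breadth-First search on a grid (0 = passable, 1 = impassable).
// Returns the number of cells on the found path, or -1 if the open
// list empties before the goal is reached.
int bfsPathLength(const std::vector<std::vector<int>>& grid,
                  std::pair<int,int> start, std::pair<int,int> goal) {
    int h = (int)grid.size(), w = (int)grid[0].size();
    std::deque<std::pair<int,int>> open;          // open list (FIFO for BFS)
    std::map<std::pair<int,int>, int> visited;    // closed-list role: cell -> path length
    open.push_back(start);                        // step 1
    visited[start] = 1;
    while (!open.empty()) {                       // step 2
        auto cur = open.front(); open.pop_front();    // step 2A
        if (cur == goal) return visited[cur];         // step 2B
        const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {                 // step 2C: successor nodes
            int nx = cur.first + dx[d], ny = cur.second + dy[d];
            std::pair<int,int> next{nx, ny};
            if (nx < 0 || ny < 0 || nx >= h || ny >= w) continue;
            if (grid[nx][ny] == 1 || visited.count(next)) continue;
            visited[next] = visited[cur] + 1;
            open.push_back(next);
        }
        // step 2D: cur stays in visited, i.e. on the closed list
    }
    return -1;
}
```

Swapping the FIFO open list for a priority queue ordered by heuristic cost gives Best-First, by given cost gives Dijkstra, and by their weighted sum gives A*.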
Breadth-First
Finds a path from the start to the goal by examining the search space ply-by-ply
Breadth-First Characteristics
Exhaustive search
Systematic, but not clever
Consumes substantial amount of CPU and memory
Guaranteed to find paths that have the fewest number of nodes in them
Not necessarily the shortest distance!
Complete algorithm
Best-First
Uses problem specific knowledge to speed up the search process
Heads straight for the goal
Computes the distance of every node to the goal
Uses the distance (or heuristic cost) as a priority value to determine the next node that should be brought out of the open list
Best-First (continued)
Best-First (continued)
Situation where Best-First finds a suboptimal path
Best-First Characteristics
Heuristic search
Uses fewer resources than Breadth-First
Tends to find good paths
No guarantee to find the optimal path
Complete algorithm
Dijkstra
Disregards distance to goal
Keeps track of the cost of every path
No guessing
Computes the accumulated cost paid to reach a node from the start
Uses the cost (called the given cost) as a priority value to determine the next node that should be brought out of the open list
Dijkstra Characteristics
Exhaustive search
At least as resource intensive as Breadth-First
Always finds the optimal path
Complete algorithm
Example
A*
Uses both heuristic cost and given cost to order the open list
Final Cost = Given Cost + (Heuristic Cost * Heuristic Weight)
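A minimal sketch of the formula, paired with a typical Manhattan-distance heuristic for 4-way grids (function names are illustrative; a weight of 1 with an admissible heuristic preserves optimality, while weights above 1 bias the search toward the goal, trading optimality for speed):

```cpp
#include <cstdlib>

// Final Cost = Given Cost + (Heuristic Cost * Heuristic Weight)
float finalCost(float givenCost, float heuristicCost, float heuristicWeight) {
    return givenCost + heuristicCost * heuristicWeight;
}

// Common admissible heuristic on 4-way grids: Manhattan distance to goal.
int manhattan(int x1, int y1, int x2, int y2) {
    return std::abs(x1 - x2) + std::abs(y1 - y2);
}
```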
A* Characteristics
Heuristic search
On average, uses fewer resources than Dijkstra and Breadth-First
An admissible heuristic guarantees it will find the optimal path
Complete algorithm
Example
Start Node and Costs
F=G+H
First Move
Second Move
Cost Map
Path
Pathfinding with Constraints
More Example