Latest AI Research in Real-Time Strategy Games
Omar Khaled Enayet – 4th Year FCIS – Computer Science Department – August 2009
Concerning planning, learning, adaptation and opponent modeling.


Page 1:

Latest AI research in real-time strategy games

Omar Khaled Enayet – 4th Year FCIS – Computer Science Department – August 2009

Concerning planning, learning, adaptation and opponent modeling.

Page 2:

Agenda

Introduction
▪ Real-Time Strategy Games
▪ Why is AI Development slow in RTS Games
▪ AI Areas needing more research in RTS Games

Latest Research
▪ Introduction
▪ Research Papers and Theses
  ▪ Introduction
  ▪ The Papers: Intro
  ▪ Case-Based Planning
  ▪ Reinforcement Learning
  ▪ Genetic Algorithms
  ▪ Hybrid Approaches
  ▪ Opponent Modeling Approaches
  ▪ Misc. Approaches

Page 3:

Introduction

Page 4:

Real-Time Strategy Games

Real-Time Strategy (RTS) games can be viewed as simplified military simulations. Several players struggle over resources scattered over a terrain by setting up an economy, building armies, and guiding them into battle in real time.

The current AI performance in commercial RTS games is poor by human standards.

RTS games are characterized by enormous state spaces, large decision spaces, and asynchronous interactions.

RTS games also require reasoning at several levels of granularity: production-economic decisions (usually expressed as resource management and technological development) and the tactical skills necessary for combat confrontation.

Page 5:

Why is AI Development slow in RTS Games?

RTS game worlds feature many objects, imperfect information, micro actions, and fast-paced action. By contrast, world-class AI players mostly exist for slow-paced, turn-based, perfect-information games, in which the majority of moves have global consequences and planning abilities can therefore be outsmarted by mere enumeration.

Market-dictated AI resource limitations. Up to now, popular RTS games have been released solely by game companies, which are naturally interested in maximizing their profit. Because graphics drives game sales and companies strive for large market penetration, only about 15% of CPU time and memory is currently allocated to AI tasks. On the positive side, as graphics hardware gets faster and memory gets cheaper, this percentage is likely to increase – provided game designers stop making RTS game worlds ever more realistic.

Lack of AI competition. In classic two-player games, tough competition among programmers has driven AI research to unmatched heights. Currently, however, there is no such competition among real-time AI researchers in games other than computer soccer. The considerable manpower needed to design and implement RTS games, and the reluctance of game companies to incorporate AI APIs in their products, are big obstacles to AI competition in RTS games.

Page 6:

AI Areas needing more research

Adversarial real-time planning. In fine-grained, realistic simulations, agents cannot afford to think in terms of micro actions such as "move one step North". Instead, abstractions of the world state have to be found that allow AI programs to conduct forward searches in a manageable abstract space and to translate the solutions found back into action sequences in the original state space. Because the environment is also dynamic, hostile, and smart, adversarial real-time planning approaches need to be investigated.
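The search-in-abstraction idea can be sketched in a few lines. Everything here is an illustrative assumption, not any paper's actual model: the "abstract state" is just a pair of army strengths, the macro-actions are attack/build, and the evaluation is plain material difference. A minimax search over such an abstraction stands in for the adversarial forward search described above:

```python
# Toy adversarial planning over a hypothetical abstract state space.
# State = (my_army, enemy_army); macro-actions abstract away unit micro-management.

def successors(state, maximizing):
    """Abstract macro-actions: attack (trade units) or build (grow an army)."""
    mine, theirs = state
    if maximizing:
        return [("attack", (mine - 1, theirs - 2)), ("build", (mine + 2, theirs))]
    return [("attack", (mine - 2, theirs - 1)), ("build", (mine, theirs + 2))]

def minimax(state, depth, maximizing=True):
    """Forward search in the abstract space; returns (value, macro-action)."""
    mine, theirs = state
    if depth == 0 or mine <= 0 or theirs <= 0:
        return mine - theirs, None          # simple material evaluation
    results = []
    for action, nxt in successors(state, maximizing):
        value, _ = minimax(nxt, depth - 1, not maximizing)
        results.append((value, action))
    return max(results) if maximizing else min(results)
```

The chosen macro-action would then have to be translated back into concrete unit orders, which is the hard part the slide alludes to.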

Decision making under uncertainty. Initially, players are not aware of the enemies’ base locations and intentions. It is necessary to gather intelligence by sending out scouts and to draw conclusions to adapt. If no data about opponent locations and actions is available yet, plausible hypotheses have to be formed and acted upon.

Opponent modeling, learning. One of the biggest shortcomings of current (RTS) game AI systems is their inability to learn quickly. Human players only need a couple of games to spot opponents’ weaknesses and to exploit them in future games. New efficient machine learning techniques have to be developed to tackle these important problems.

Page 7:

AI Areas needing more research (2)

Spatial and temporal reasoning. Static and dynamic terrain analysis, as well as understanding the temporal relations of actions, is of utmost importance in RTS games – and yet current game AI programs largely ignore these issues and fall victim to simple common-sense reasoning.

Resource management. Players start the game by gathering local resources to build up defenses and attack forces, to upgrade weaponry, and to climb the technology tree. At any given time, players have to balance the resources they spend in each category. For instance, a player who invests too many resources in upgrades will become prone to attacks because of an insufficient number of units. Proper resource management is therefore a vital part of any successful strategy.

Page 8:

AI Areas needing more research (3)

Collaboration. In RTS games, groups of players can join forces and share intelligence. How to coordinate actions effectively through communication among the parties is a challenging research problem. For instance, in mixed human/AI teams, the AI player often behaves awkwardly because it does not monitor the human's actions, cannot infer the human's intentions, and fails to synchronize attacks.

Pathfinding. Finding high–quality paths quickly in 2D terrains is of great importance in RTS games. In the past, only a small fraction of the CPU time could be devoted to AI tasks, of which finding shortest paths was the most time consuming. Hardware graphics accelerators are now allowing programs to spend more time on AI tasks. Still, the presence of hundreds of moving objects and the urge for more realistic simulations in RTS games make it necessary to improve and generalize pathfinding algorithms. Keeping unit formations and taking terrain properties, minimal turn radii, inertia, enemy influence, and fuel consumption into account greatly complicates the once simple problem of finding shortest paths.
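As a baseline for the pathfinding problem above, the classic approach is A* search with a Manhattan-distance heuristic. This sketch works on a plain 4-connected grid; the formation keeping, turn radii, and terrain costs the slide mentions are exactly what it omits:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means blocked.
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]   # (f, g, cell, path so far)
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):   # only keep improved routes
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None    # goal unreachable
```

With hundreds of units pathing simultaneously, production RTS engines layer hierarchical maps and path caching on top of this basic search.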

Page 9:

Latest Research

Page 10:

Latest Research: Intro

Current implementations of RTS games make extensive use of FSMs, which makes them highly predictable.

Adaptation is achieved through learning, planning, or a mixture of both.

Planning is beginning to appear in commercial games such as Demigod and the latest Total War game.

Learning has had limited success so far. Developers are experimenting with replacing the ordinary decision-making systems (FSMs, FuSMs, scripting, decision trees, and Markov systems) with learning techniques.

Page 11:

Latest Research : The Papers

More than 30 papers/theses discuss planning and learning in RTS games.

The three major approaches to AI research in RTS games concerning learning and planning are Case-Based Planning, Reinforcement Learning (in its different forms), and Genetic Algorithms. Some papers use a hybrid of these techniques. Others use other planning formalisms such as PDDL, opponent-modeling techniques, or miscellaneous approaches.

3 papers encourage research in this field.
9 papers use a Case-Based Planning approach (2003–2009); 1 uses a hybrid CBR/GA approach (2008); 1 uses a hybrid CBR/RL approach (2007).
10 papers use Reinforcement Learning in its different forms (Monte-Carlo, Dynamic Scripting, and TD-Learning); 1 uses TD-Learning with GA; 1 uses Dynamic Scripting with GA.
3 papers use Genetic Algorithms.
3 papers apply opponent-modeling techniques.

Page 12:

Encouraging Research: Papers

RTS Games and Real-Time AI Research – 2003
RTS Games: A New AI Research Challenge – 2003
Call for AI Research in RTS Games – 2004

Page 13:

Case-Based Planning

Case-based planning is the reuse of past successful plans in order to solve new planning problems.

It’s an application of Case-Based Reasoning in planning.
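A minimal sketch of the retrieve-and-reuse cycle at the heart of case-based planning. The state features, plans, and similarity measure are all made-up illustrations, and the revision/retention steps of a full CBR cycle are omitted:

```python
# Tiny case base: each case pairs a game-state description (normalized
# features, values in [0, 1]) with a plan that worked in that situation.
case_base = [
    ({"enemy_air": 0.8, "own_economy": 0.4}, ["build_anti_air", "expand"]),
    ({"enemy_air": 0.1, "own_economy": 0.9}, ["mass_infantry", "attack"]),
]

def similarity(a, b):
    """Inverse of summed per-feature distance; higher means more similar."""
    dist = sum(abs(a[k] - b[k]) for k in a)
    return 1.0 / (1.0 + dist)

def retrieve(state):
    """Retrieve step: return the plan of the most similar stored case."""
    return max(case_base, key=lambda case: similarity(case[0], state))[1]

# Reuse step: the retrieved plan is applied to the new, similar situation.
plan = retrieve({"enemy_air": 0.7, "own_economy": 0.5})
```

The papers below differ mainly in how they represent cases, measure similarity, and adapt retrieved plans on-line.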

Page 14:

Case-Based Planning: Papers

The David Aha Research Thread:
On the Role of Explanation for Hierarchical Case-Based Planning in RTS Games – after 2004
Learning to Win: Case-Based Plan Selection in a RTS Game – 2005
Defeating Novel Opponents in a Real-Time Strategy Game – 2005

The Santiago Ontanon Research Thread:
Case-Based Planning and Execution for RTS Games – 2007
Learning from Human Demonstrations for Real-Time Case-Based Planning – 2008
On-Line Case-Based Plan Adaptation for RTS Games – 2008
Situation Assessment for Plan Retrieval in RTS Games – 2009

Other Papers:
Case-based plan recognition for RTS games – after 2003
Mining Replays of RTS Games to learn player strategies – 2007

Page 15:

Reinforcement Learning

It is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward.

Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor are sub-optimal actions explicitly corrected.

Further, there is a focus on on-line performance, which involves finding a balance between exploration and exploitation.
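The exploration/exploitation balance is commonly handled with an epsilon-greedy policy. Here is a toy tabular Q-learning sketch on a one-dimensional "advance toward the enemy base" task; the states, actions, rewards, and constants are all illustrative assumptions, not taken from any of the papers below:

```python
import random

# States 0..4; reaching state 4 (the enemy base) yields reward 1.
# Action 0 = retreat, action 1 = advance.
random.seed(0)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}

def choose(state):
    if random.random() < EPSILON:                       # explore: random action
        return random.choice((0, 1))
    return max((0, 1), key=lambda a: q[(state, a)])     # exploit: best known action

for _ in range(500):                                    # training episodes
    state = 0
    while state != 4:
        action = choose(state)
        next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == 4 else 0.0
        # TD update: nudge Q toward reward plus discounted best future value
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state
```

After training, "advance" dominates "retreat" in every state, because delayed reward has been propagated back through the Q-table.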

Page 16:

Reinforcement Learning: Papers

Dynamic Scripting:
Goal-Directed Hierarchical Dynamic Scripting for RTS Games – 2006
Automatically Acquiring Domain Knowledge For Adaptive Game AI Using Evolutionary Learning – 2008

Monte-Carlo Planning:
UCT (Monte-Carlo) for Tactical Assault Battles in Real-Time Strategy Games – 2003
Monte Carlo Planning in RTS Games – after 2004

Temporal-Difference Learning:
Learning Unit Values in Wargus Using Temporal Differences – 2005
Establishing an Evaluation Function for RTS games – after 2005

Dynamic Scripting vs. Monte-Carlo Planning:
Adaptive reinforcement learning agents in RTS games – 2008

Hierarchical Reinforcement Learning:
Hierarchical Reinforcement Learning in Computer Games – after 2006
Hierarchical Reinforcement Learning with Deictic representation in a computer game – after 2006

Page 17:

Genetic Algorithms

Genetic algorithms are a particular class of evolutionary algorithms (EA) that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover.
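A minimal GA showing those operators (selection, crossover with inheritance of parent segments, and mutation) on a stand-in fitness function: maximizing the number of 1-bits in a bit string. All parameters are illustrative; an RTS application would encode strategy parameters or script rules in the genome instead:

```python
import random

random.seed(1)
GENES, POP, GENERATIONS = 20, 30, 40

def fitness(ind):
    """Stand-in fitness: count of 1-bits in the genome."""
    return sum(ind)

def tournament(pop):
    """Selection: best of three randomly sampled individuals."""
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):
    """Single-point crossover: child inherits a segment from each parent."""
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(ind, rate=0.02):
    """Mutation: flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
best = max(pop, key=fitness)
```

Tournament selection keeps the pressure toward fitter individuals while crossover and mutation keep exploring the genome space.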

Page 18:

Genetic Algorithms : Papers

Human-like Behavior in RTS Games – 2003

Co-evolving Real-Time Strategy Game Playing Influence Map Trees with genetic algorithms

Co-Evolution in Hierarchical AI for Strategy Games - after 2004

Page 19:

Hybrid Approaches: Papers

Genetic Algorithms + Dynamic Scripting:
Improving Adaptive Game AI With Evolutionary Learning – 2004
Automatically Acquiring Domain Knowledge For Adaptive Game AI using Evolutionary Learning – 2005

Genetic Algorithms + TD-Learning:
Neural Networks in RTS AI – 2001

Genetic Algorithms + Case-Based Planning:
Stochastic Plan Optimization in Real-Time Strategy Games – 2008

Case-Based Reasoning + Reinforcement Learning:
Transfer Learning in Real-Time Strategy Games Using Hybrid CBR-RL – 2007

Page 20:

Opponent Modeling : Papers

Hierarchical Opponent Models for Real-Time Strategy Games – 2007

Opponent modeling in real-time strategy games – after 2007

Design of Autonomous Systems: Learning Adaptive playing a RTS game – 2009

Page 21:

Misc. Approaches: Papers

Supervised Learning:
Player Adaptive Cooperative Artificial Intelligence for RTS Games – 2007

PDDL:
A First Look at Build-Order Optimization in RTS games – after 2006

Finite-State Machines:
SORTS: A Human-Level Approach to Real-Time Strategy AI – 2007

Others:
Real-time challenge balance in an RTS game using rtNEAT – 2008
AI Techniques in RTS Games – September 2006

Page 22:

References

RTS Games and Real-Time AI Research – Michael Buro & Timothy M. Furtak – 2003

Call for AI Research in RTS Games – Michael Buro – 2004

AIGameDev forums, GameDev.net forums, Wikipedia, and others.