Introduction to Intelligent Agents

TRANSCRIPT

Page 1: Introduction to Intelligent Agents

Ontologies | Reasoning | Components | Agents | Simulations

Introduction to Intelligent Agents

Jacques Robin

Page 2: Introduction to Intelligent Agents

Outline

What are intelligent agents?
Characteristics of artificial intelligence
Applications and sub-fields of artificial intelligence
Characteristics of agents
Characteristics of agents' environments
Agent architectures

Page 3: Introduction to Intelligent Agents

What are Intelligent Agents?

Q: What are software agents?
A: Software whose architecture is based on the following abstractions: immersion in a distributed environment, continuous thread, encapsulation, sensor, perception, actuator, action, own goals, autonomous decision making

Q: What is artificial intelligence?
A: The field of study dedicated to:
Reducing the range of tasks that humans carry out better than current software or robots
Emulating humans' capability to solve approximately but efficiently most instances of problems proven (or suspected) to be hard to solve algorithmically (NP-hard, undecidable, etc.) in the worst case, using innovative, often human-inspired, alternative computational metaphors and techniques
Emulating humans' capability to solve vaguely specified problems using partial, uncertain information

[Diagram: Agents at the intersection of Artificial Intelligence, Distributed Systems, and Software Engineering.]

Page 4: Introduction to Intelligent Agents

Artificial Intelligence: Characteristics

Highly multidisciplinary, inside and outside computer science
A runaway field, by definition at the forefront of computation, tackling ever more innovative, challenging problems as the ones it has solved become mainstream computing
Most research in any other field of computation also involves AI problems, techniques and metaphors

Q: What conclusions can be drawn from these characteristics?
A: Hard to avoid; very, very hard to do well
"Well" as in:
Well-founded (rigorously defined theoretical basis, explicit simplifying assumptions and limitations)
Easy to use (seamlessly integrated, easy to understand)
Easy to reuse (general, extensible techniques)
Scalable (at run time, at development time)

Page 5: Introduction to Intelligent Agents

What is an Agent? General Minimal Definition

Any entity (human, animal, robot, software):
Situated in an environment (physical, virtual or simulated) that it
Perceives through sensors (eyes, camera, socket)
Acts upon through effectors (hands, wheels, socket)
Possesses its own goals, i.e., preferred states of the environment (explicit or implicit)
Autonomously chooses its actions to alter the environment towards its goals, based on its perceptions and prior encapsulated information about the environment

Processing cycle:
1. Use sensors to perceive P
2. Interpret I = f(P)
3. Choose the next action A = g(I,G) to perform to reach its goal G
4. Use actuators to execute A
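The processing cycle above can be sketched in Python. This is a minimal illustration: the class, the goal value and the percept strings are invented for the example and are not part of the slides.

```python
class Agent:
    """Minimal skeleton of the perceive-interpret-choose-act cycle."""

    def __init__(self, goal):
        self.goal = goal  # G: the agent's own goal

    def interpret(self, percept):
        """I = f(P): turn a raw percept into an interpretation."""
        return {"glitter": percept == "glitter"}

    def choose_action(self, interpretation):
        """A = g(I, G): choose the next action from interpretation and goal."""
        if self.goal == "get-gold" and interpretation["glitter"]:
            return "pick"
        return "forward"

    def step(self, percept):
        """One full perception-reasoning-action cycle."""
        interpretation = self.interpret(percept)
        return self.choose_action(interpretation)


agent = Agent(goal="get-gold")
print(agent.step("glitter"))  # -> pick
print(agent.step("breeze"))   # -> forward
```

The environment side (sensing P and executing A) is left out; only the reasoning steps 2 and 3 are shown.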

Page 6: Introduction to Intelligent Agents

What is an Agent?

[Diagram: an autonomously reasoning agent immersed in its environment. Sensors deliver percepts P (1. environment percepts, 2. self-percepts, 3. communicative percepts); percept interpretation computes I = f(P); action choice, guided by the goals, computes A = g(I,G); effectors execute actions A (1. environment-altering actions, 2. perceptive actions, 3. communicative actions).]

Page 7: Introduction to Intelligent Agents

Agent x Object

Agent:
Intentionality: encapsulates its own goals (even if implicitly) in addition to data and behavior
Decision autonomy: pro-actively executes behaviors to satisfy its goals; can refuse a request from another agent to execute a behavior
More complex input/output: percepts and actions
Temporal continuity: encapsulates an endless thread that constantly monitors the environment
Coarser granularity: encapsulates code of a size comparable to a package or component; composed of various objects when implemented in an OO language

Object:
No goals
No decision autonomy: executes behaviors only reactively, whenever invoked by other objects; always executes a behavior invoked by another object
Simpler input/output: mere method parameters and return values
Temporally discontinuous: active only during the execution of its methods

Page 8: Introduction to Intelligent Agents

Intelligent Agent x Simple Software Agent

[Diagram: both agents share sensors, effectors, goals, percept interpretation I = f(P) and action choice A = g(I,G); in the intelligent agent the interpretation and choice components use AI, while in the simple software agent they use conventional processing.]

Page 9: Introduction to Intelligent Agents

Intelligent Agent

[Diagram: the intelligent agent is a situated agent, coupling AI-based percept interpretation and action choice to an environment through sensors and effectors, guided by goals; a classical AI system is a disembodied AI system, applying AI reasoning to input data, a goal and output data, with no coupling to an environment.]

Page 10: Introduction to Intelligent Agents

What is an Agent? Other Optional Properties

Reasoning autonomy:
Requires AI: an inference engine and a knowledge base
Key for: embedded expert systems, intelligent controllers, robots, games, internet agents, ...
Adaptability:
Requires AI: machine learning
Key for: internet agents, intelligent interfaces, ...
Sociability:
Requires AI plus advanced distributed systems techniques:
Standard protocols for communication, cooperation, negotiation
Automated reasoning about other agents' beliefs, goals, plans and trustworthiness
Social interaction architectures
Key for: multi-agent simulations, e-commerce, ...

Page 11: Introduction to Intelligent Agents

What is an Agent? Other Optional Properties

Personality:
Requires AI: attitude and emotional modeling
Key for: digital entertainment, virtual reality avatars, user-friendly interfaces, ...
Temporal continuity and persistence:
Requires interfaces with the operating system and DBMS
Key for: information filtering, monitoring, intelligent control, ...
Mobility:
Requires: network interface, secure protocols, mobile code support
Key for: information gathering agents, ...
Security concerns have prevented its adoption in practice

Page 12: Introduction to Intelligent Agents

Welcome to the Wumpus World!

Agent-oriented formulation:
Agents: gold digger
Environment objects: caverns, walls, pits, wumpus, gold, bow, arrow
Environment's initial state
Agents' goals: be alive in cavern (1,1) with the gold
Perceptions:
Touch sensor: breeze, bump
Smell sensor: stench
Light sensor: glitter
Sound sensor: scream
Actions:
Legs effector: forward, rotate 90º
Hands effector: shoot, climb out

Page 13: Introduction to Intelligent Agents

Wumpus World: Abbreviations

[Diagram: a 4x4 cavern grid with the start at (1,1), showing pits (P) surrounded by breezes (B), the wumpus (W) surrounded by stenches (S), and the glittering gold (G).]

A - Agent
W - Wumpus
P - Pit
G - Gold
X? - Possibly X
X! - Confirmed X
V - Visited cavern
B - Breeze
S - Stench
G - Glitter
OK - Safe cavern

Page 14: Introduction to Intelligent Agents

Perceiving, Reasoning and Acting in the Wumpus World

Percept sequence: nothing (t=0), breeze (t=2)

[Diagram: two snapshots of the Wumpus World model maintained by the agent. At t=0 the agent is at (1,1) and marks the adjacent caverns ok. At t=2, after perceiving a breeze, it marks the unvisited caverns adjacent to its location as possible pits (P?).]

Page 15: Introduction to Intelligent Agents

Perceiving, Reasoning and Acting in the Wumpus World

[Diagram: two further snapshots of the agent's world model. After a stench percept, the wumpus location is confirmed (W!) and a pit is confirmed (P!). t=7: go to (2,1), the sole safe unvisited cavern. After the percept sequence {stench, breeze, glitter}, t=11: go to (2,3) to find the gold.]

Page 16: Introduction to Intelligent Agents

Classification Dimensions of Agent Environments

Agent environments can be classified as points in a multi-dimensional space

The dimensions are:
Observability
Determinism
Dynamicity
Mathematical domains of the variables
Episodic or not
Multi-agency
Size
Diversity

Page 17: Introduction to Intelligent Agents

Observability

Fully observable (or accessible):
The agent's sensors perceive, at each instant, all the aspects of the environment relevant to choosing the best action to reach its goal
Partially observable (or inaccessible, or with hidden variables)
Sources of partial observability:
Realms inaccessible to any available sensor
Limited sensor scope
Limited sensor sensitivity
Noisy sensors

Page 18: Introduction to Intelligent Agents

Determinism

Deterministic: every execution of a given action in a given situation always yields the same result
Non-deterministic (or stochastic): action consequences are partially unpredictable
Sources of non-determinism:
Inherent to the environment: quantum-level granularity, games with randomness
Other agents with unknown or non-deterministic goals or action policies
Noisy effectors
Limited granularity of the effectors or of the representation used to choose the actions to execute

Page 19: Introduction to Intelligent Agents

Dynamicity: Stationary and Sequential Environments

Stationary: a single perception-reasoning-action cycle, during which the environment is static
Sequential: a sequence of perception-reasoning-action cycles, during each of which the environment changes only as a result of the agent's actions

[Diagram: in a stationary environment, one percept-reasoning-action cycle takes the environment from State 1 to State 2. In a sequential environment, successive percept-reasoning-action cycles take it through State 1, State 2, State 3, ..., State N, each transition caused only by the agent's actions.]

Page 20: Introduction to Intelligent Agents

Dynamicity: Concurrent Synchronous and Asynchronous

Synchronous: the environment can change on its own between one action and the agent's next perception, but not during its reasoning
Asynchronous: the environment can change on its own at any time, including during the agent's reasoning

[Diagram: in a synchronous concurrent environment the state can change between an agent's action and its next percept, but not while it reasons; in an asynchronous concurrent environment the state can also change while the agent is reasoning.]

Page 21: Introduction to Intelligent Agents

Multi-Agency

Sophistication of the agent society:
Number of agent roles and agent instances
Multiplicity and dynamicity of agent roles
Communication, cooperation and negotiation protocols
Main classes:
Mono-agent
Multi-agent cooperative
Multi-agent competitive
Multi-agent cooperative and competitive, with static or dynamic coalitions

Page 22: Introduction to Intelligent Agents

Mathematical Domain of Variables

[Diagram: taxonomy of variable domains. Binary: dichotomous, boolean. Qualitative: nominal, ordinal. Quantitative: interval, fractional; either discrete or continuous (ℝ, [0,1]).]

MAS variables:
Parameters of agent percepts, actions and goals
Attributes of environment objects
Arguments of environment relations, states, events and locations

Page 23: Introduction to Intelligent Agents

Mathematical Domain of Variables

Binary:
Boolean, e.g., Male ∈ {True, False}
Dichotomous, e.g., Sex ∈ {Male, Female}
Nominal (or categorical):
Finite partition of a set with neither order nor measure
Relations: only = or ≠
e.g., Brazilian, French, British
Ordinal (or enumerated):
Finite partition of a (partially or totally) ordered set without measure
Relations: only =, ≠, <, >
e.g., poor, medium, good, excellent
Interval:
Finite partition of an ordered set with a measure m defining a distance d: ∀X,Y, d(X,Y) = |m(X) - m(Y)|
No inherent zero
e.g., Celsius temperature
Fractional (or proportional):
Partition with distance and inherent zero
Relations: any
e.g., Kelvin temperature
Continuous (or real):
Infinite set of values

Page 24: Introduction to Intelligent Agents

Other Characteristics

Episodic: the agent's experience is divided into separate episodes; the results of actions in each episode are independent of previous episodes
e.g., an image classifier is episodic, chess is not; a soccer tournament is episodic, a soccer game is not
Open environment: partially observable, non-deterministic, non-episodic, continuous variables, concurrent asynchronous, multi-agent
e.g., RoboCup, the Internet, the stock market

Page 25: Introduction to Intelligent Agents

Size and Diversity

Size, i.e., the number of instances of:
Agent percepts, actions and goals
Environment agents, objects, relations, states, events and locations
Dramatically affects the scalability of agent reasoning execution

Diversity, i.e., the number of classes of:
Agent percepts, actions and goals
Environment agents, objects, relations, states, events and locations
Dramatically affects the scalability of the agent knowledge acquisition process

Page 26: Introduction to Intelligent Agents

Agents' Internal Architectures

Reflex agent (purely reactive)
Automata agent (reactive with state)
Goal-based agent
Planning agent
Hybrid reflex-planning agent
Utility-based agent (decision-theoretic)
Layered agent
Adaptive agent (learning agent)

Cognitive agent
Deliberative agent

Page 27: Introduction to Intelligent Agents

Reflex Agent

[Diagram: sensors deliver percepts to condition-action rules (percepts → action) that directly drive the effectors: A(t) = h(P(t)).]

Page 28: Introduction to Intelligent Agents

Remember ...

[Diagram: the general agent schema again. Sensors deliver percepts P; percept interpretation computes I = f(P); action choice, guided by the goals, computes A = g(I,G); effectors execute A.]

Page 29: Introduction to Intelligent Agents

So?

[Diagram: in the reflex agent, percept interpretation (I = f(P)), goals and action choice (A = g(I,G)) collapse into a single set of condition-action rules: A(t) = h(P(t)).]

Page 30: Introduction to Intelligent Agents

Reflex Agent

Principle:
Use rules (or functions, procedures) that directly associate percepts to actions
e.g., IF speed > 60 THEN fine
e.g., IF the front car's stop lights switch on THEN brake
Execute the first rule whose left-hand side matches the current percepts
Wumpus World example:
IF visualPercept = glitter THEN action = pick
see(glitter) → do(pick) (logical representation)
Pros:
Condition-action rules are a clear, modular, efficient representation
Cons:
Lack of memory prevents use in partially observable, sequential, or non-episodic environments
e.g., in the Wumpus World a reflex agent can't remember which path it has followed, when to go out of the cavern, where exactly the dangerous caverns are located, etc.
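The condition-action rule scheme can be sketched in Python as an ordered rule list where the first matching rule fires. The rule set and the dictionary percept encoding are illustrative assumptions, not from the slides.

```python
# Condition-action rules: (condition, action) pairs tried in order.
RULES = [
    (lambda p: p.get("glitter"), "pick"),
    (lambda p: p.get("front_car_braking"), "brake"),
    (lambda p: True, "forward"),  # catch-all default rule
]


def reflex_agent(percept):
    """A(t) = h(P(t)): the action depends only on the current percept."""
    for condition, action in RULES:
        if condition(percept):  # first rule whose left-hand side matches
            return action


print(reflex_agent({"glitter": True}))  # -> pick
print(reflex_agent({}))                 # -> forward
```

Note that the agent has no state at all, which is exactly the limitation discussed above: it cannot remember visited caverns or a path back out.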

Page 31: Introduction to Intelligent Agents

Automata Agent

[Diagram: sensors feed percept-interpretation rules (percepts(t) ∧ model(t) → model'(t)); model-update rules (model(t-1) ∧ model(t) ∧ model'(t) → model''(t)) maintain a (past and) current environment model; action-choice rules (model''(t) → action(t); action(t) ∧ model''(t) → model(t+1)), guided by the goals, drive the effectors.]

Page 32: Introduction to Intelligent Agents

Automata Agent

Rules associate actions to percepts indirectly, through the incremental construction of an environment model (the internal state of the agent)
Action choice based on: current percepts + previous percepts + previous actions + encapsulated knowledge of the initial environment state
Overcomes the reflex agent's limitations in partially observable, sequential and non-episodic environments:
Can integrate past and present percepts to build a rich representation from partial observations
Can distinguish between distinct environment states that are indistinguishable by instantaneous sensor signals
Limitations:
No explicit representation of the agent's preferred environment states
For agents that must change goals many times to perform well, the automata architecture does not scale (combinatorial explosion of rules)

Page 33: Introduction to Intelligent Agents

Automata Agent Rule Examples

Rules percept(t) ∧ model(t) → model'(t):
IF the visual percept at time T is glitter
AND the location of the agent at time T is (X,Y)
THEN the location of the gold at time T is (X,Y)
∀X,Y,T see(glitter,T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).

Rules model'(t) → model''(t):
IF the agent is holding the gold at time T
AND the location of the agent at time T is (X,Y)
THEN the location of the gold at time T is (X,Y)
∀X,Y,T withGold(T) ∧ loc(agent,X,Y,T) → loc(gold,X,Y,T).

Page 34: Introduction to Intelligent Agents

Automata Agent Rule Examples

Rules model(t) → action(t):
IF the location of the agent at time T is (X,Y)
AND the location of the gold at time T is (X,Y)
THEN choose action pick at time T
∀X,Y,T loc(agent,X,Y,T) ∧ loc(gold,X,Y,T) → do(pick,T)

Rules action(t) ∧ model(t) → model(t+1):
IF the chosen action at time T was pick
THEN the agent is holding the gold at time T+1
∀T done(pick,T) → withGold(T+1).

Page 35: Introduction to Intelligent Agents

(Explicit) Goal-Based Agent

[Diagram: as the automata agent, plus goal-update rules. Percept interpretation: percept(t) ∧ model(t) → model'(t). Model update: model(t-1) ∧ model(t) ∧ model'(t) → model''(t). Goal update: model''(t) ∧ goals(t-1) → goals'(t). Action choice: model''(t) ∧ goals'(t) → action(t); action(t) ∧ model''(t) → model(t+1).]

Page 36: Introduction to Intelligent Agents

(Explicit) Goal-Based Agent

Principle: explicit and dynamically alterable goals
Pros:
More flexible and autonomous than the automata agent
Adapts its strategy to situation patterns summarized in its goals
Limitations:
When the current goal is not reachable as the effect of a single action, it is unable to plan a sequence of actions
Does not make long-term plans
Does not handle multiple, potentially conflicting active goals

Page 37: Introduction to Intelligent Agents

Goal-Based Agent Rule Examples

Rule model(t) ∧ goal(t) → action(t):
IF the goal of the agent at time T is to return to (1,1)
AND the agent is at (X,Y) at time T
AND the orientation of the agent at time T is 90º
AND (X,Y+1) is safe at time T
AND (X,Y+1) has not been visited before time T
AND (X-1,Y) is safe at time T
AND (X-1,Y) was visited before time T
THEN choose action turn left at time T
∀X,Y,T (∃N goal(T,loc(agent,1,1,T+N)) ∧ loc(agent,X,Y,T) ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T) ∧ ¬∃M loc(agent,X,Y+1,T-M) ∧ safe(loc(X-1,Y),T) ∧ ∃K loc(agent,X-1,Y,T-K)) → do(turn(left),T)

[Grid sketch: agent A at (X,Y); (X-1,Y) safe and visited; (X,Y+1) safe and unvisited.]

Page 38: Introduction to Intelligent Agents

Goal-Based Agent Rule Examples

Rule model(t) ∧ goal(t) → action(t):
IF the goal of the agent at time T is to find the gold
AND the agent is at (X,Y) at time T
AND the orientation of the agent at time T is 90º
AND (X,Y+1) is safe at time T
AND (X,Y+1) has not been visited before time T
AND (X-1,Y) is safe at time T
AND (X-1,Y) was visited before time T
THEN choose action forward at time T
∀X,Y,T (∃N goal(T,withGold(T+N)) ∧ loc(agent,X,Y,T) ∧ orientation(agent,90,T) ∧ safe(loc(X,Y+1),T) ∧ ¬∃M loc(agent,X,Y+1,T-M) ∧ safe(loc(X-1,Y),T) ∧ ∃K loc(agent,X-1,Y,T-K)) → do(forward,T)

[Grid sketch: agent A at (X,Y); (X-1,Y) safe and visited; (X,Y+1) safe and unvisited.]

Page 39: Introduction to Intelligent Agents

Goal-Based Agent Rule Examples

Rule model(t) ∧ goal(t-1) → goal'(t):
// If the agent has reached its goal of holding the gold,
// then its new goal shall be to go back to (1,1)
IF the goal of the agent at time T-1 was to find the gold
AND the agent is holding the gold at time T
THEN the goal of the agent at time T is to be at location (1,1)
∀T ((∃N goal(agent,T-1,withGold(T+N)) ∧ withGold(T)) → ∃M goal(agent,T,loc(agent,1,1,T+M))).
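The goal-update and goal-dependent action-choice rules can be sketched in Python. The goal names, model keys and the climb-out action are invented for the illustration; they are not slide vocabulary.

```python
def update_goal(goal, model):
    """goal(t-1) /\ model(t) -> goal(t): swap goals once the old one is met."""
    if goal == "find-gold" and model.get("with_gold"):
        return "return-to-1-1"
    return goal


def choose_action(goal, model):
    """model(t) /\ goal(t) -> action(t), highly simplified."""
    if goal == "find-gold" and model.get("glitter"):
        return "pick"
    if goal == "return-to-1-1" and model.get("at") == (1, 1):
        return "climb-out"
    return "forward"


model = {"with_gold": True, "at": (1, 1)}
goal = update_goal("find-gold", model)
print(goal)                        # -> return-to-1-1
print(choose_action(goal, model))  # -> climb-out
```

Because the goal is an explicit value rather than being baked into the rules, the same action-choice code serves both phases of the mission.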

Page 40: Introduction to Intelligent Agents

Planning Agent

[Diagram: as the goal-based agent, plus a prediction component and hypothetical future environment models. Percept interpretation: percept(t) ∧ model(t) → model'(t). Model update: model(t-1) ∧ model(t) ∧ model'(t) → model''(t). Goal update: model''(t) ∧ goals(t-1) → goals'(t). Prediction of future environments: model''(t) ∧ action(t) → model(t+1), iterated up to model(t+n). Action choice: if model(t+n) = result([action1(t), ..., actionN(t+n)]) and model(t+n) satisfies goal(t), then do(action1(t)).]

Page 41: Introduction to Intelligent Agents

Planning Agent

Percepts and actions are associated very indirectly, through:
Past and current environment models
Past and current explicit goals
Prediction of the future environments resulting from different possible action sequences
Rule chaining is needed to build an action sequence from rules that capture the immediate consequences of single actions
Pros:
Foresight allows choosing more relevant and safer actions in sequential environments
Cons: there is little point in building elaborate long-term plans in:
Highly non-deterministic environments (too many possibilities to consider)
Largely non-observable environments (not enough knowledge available before acting)
Asynchronous concurrent environments (only cheap reasoning can reach a conclusion under time pressure)

Page 42: Introduction to Intelligent Agents

Hybrid Reflex-Planning Agent

[Diagram: two threads share the sensors, effectors, goals and a current, past and future environment model. The planning thread chains percept interpretation, current-model update, goal update, future-environment prediction and action choice; the reflex thread maps percepts directly to actions through reflex rules; a synchronization component arbitrates between them.]

Page 43: Introduction to Intelligent Agents

Hybrid Reflex-Planning Agent

Pros:
Takes advantage of all the time and knowledge available to choose the best possible action (within the limits of its prior knowledge and percepts)
Sophisticated yet robust
Cons:
Costly to develop
The same knowledge is encoded in different forms in each component
Global behavior coherence is harder to guarantee
Analysis and debugging are hard due to synchronization issues
Not that many environments feature large variations in the reasoning time available in different perception-reasoning-action cycles

Page 44: Introduction to Intelligent Agents

Layered Agents

Many sensors/effectors are too fine-grained to reason about goals directly in terms of the data/commands they provide
Such cases require a layered agent that decomposes its reasoning into multiple abstraction layers
Each layer represents the percepts, environment model, goals and actions at a different level of detail
Abstraction can consist in: discretizing, approximating, clustering or classifying data from prior layers along temporal, spatial, functional or social dimensions
Detailing can consist in: decomposing higher-level actions into lower-level ones along temporal, spatial, functional or social dimensions

[Diagram: perceive in detail → abstract → decide abstractly → detail → act in detail.]

Page 45: Introduction to Intelligent Agents

Layered Automata Agent

[Diagram: a stack of layers between the sensors and effectors; each layer runs percept interpretation, environment-model update, and action choice and execution control over its own environment model. Lower layers use numeric representations, intermediate layers probabilistic ones, and upper layers logical/relational ones; the individual layer formulas are not recoverable from the transcript.]


Page 47: Introduction to Intelligent Agents

Abstraction Layer Examples

[Figure: two example abstraction layers, X and Y; content not recoverable from the transcript.]

Page 48: Introduction to Intelligent Agents

Utility-Based Agent

Principle:
Goals only express boolean agent preferences among environment states
A utility function u allows expressing finer-grained agent preferences
u can be defined on a variety of domains and ranges:
actions, i.e., u: action → ℝ (or [0,1])
action sequences, i.e., u: [action1, ..., actionN] → ℝ (or [0,1])
environment states, i.e., u: environmentStateModel → ℝ (or [0,1])
environment state sequences, i.e., u: [state1, ..., stateN] → ℝ (or [0,1])
environment state-action pairs, i.e., u: environmentStateModel x action → ℝ (or [0,1])
environment state-action pair sequences, i.e., u: [(action1,state1), ..., (actionN,stateN)] → ℝ (or [0,1])
Pros:
Allows solving optimization problems that aim to find the best solution
Allows trading off among multiple conflicting goals with distinct probabilities of being reached
Cons:
Currently available methods to compute (even approximately) argmax(u) do not scale up to large or diverse environments
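For a finite action set, utility-based action choice reduces to an argmax over u. A minimal Python sketch, where the utility values are entirely made up for the example:

```python
def choose_action(actions, utility):
    """do(argmax over a in actions of u(a)): pick the highest-utility action."""
    return max(actions, key=utility)


# Illustrative utility u: action -> R over Wumpus World actions.
u = {"forward": 0.2, "pick": 1.0, "shoot": 0.1}.get

print(choose_action(["forward", "pick", "shoot"], u))  # -> pick
```

The scalability caveat above applies: this brute-force argmax only works when the set of candidate actions (or action sequences) is small enough to enumerate.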

Page 49: Introduction to Intelligent Agents

Utility-Based Reflex Agent

[Diagram: percept-interpretation rules map percepts to candidate actions; action choice applies a utility function u: actions → ℝ and executes do(argmax over a in actions of u(a)).]

Page 50: Introduction to Intelligent Agents

Utility-Based Planning Agent

[Diagram: percept-interpretation rules (percept(t) ∧ model(t) → model'(t)) and model-update rules (model'(t) → model''(t)) maintain a past and current environment model; future-environment prediction rules (model''(t) ∧ action(t) → model(t+1); model''(t) → model(t+1)) build hypothesized future environment models; a utility function u: model(t+n) → ℝ scores them, and action choice executes do(argmax over candidate sequences [action_i, ..., action_{i+n}] of u(result([action_i, ..., action_{i+n}]))).]

Page 51: Introduction to Intelligent Agents

Adaptive Agent

[Diagram: an acting component (reflex, automata, goal-based, planning, utility-based or hybrid) is monitored by a performance-analysis component, which feeds a learning component and a new-problem-generation component. The learning component learns rules or functions such as:
percept(t) → action(t)
percept(t) ∧ model(t) → model'(t)
model(t) → model'(t)
model(t-1) → model(t)
model(t) → action(t)
action(t) → model(t+1)
model(t) ∧ goal(t) → action(t)
goal(t) ∧ model(t) → goal'(t)
utility(action) = value
utility(model) = value]

Page 52: Introduction to Intelligent Agents

Simulated Environments

Environment simulators:
Often themselves internally follow an agent architecture
Should be able to simulate a large class of environments, specialized by setting many configurable parameters either manually or randomly within a manually selected range
e.g., configure a generic Wumpus World simulator to generate world instances with a square-shaped cavern, a static wumpus and a single gold nugget, where the cavern size, pit numbers and locations, and wumpus and gold locations are randomly picked

Environment simulator processing cycle:
1. Compute the percept of each agent in the current environment
2. Send these percepts to the corresponding agents
3. Receive the action chosen by each agent
4. Update the environment to reflect the cumulative consequences of all these actions
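The four-step cycle can be sketched in Python. The toy counter environment and the method names (percept_for, update, act) are invented for the illustration.

```python
def run_simulation(env, agents, steps):
    """Environment simulator cycle: percepts out, actions in, then update."""
    for _ in range(steps):
        # 1-2. Compute and send each agent's percept.
        percepts = {name: env.percept_for(name) for name in agents}
        # 3. Receive the action chosen by each agent.
        actions = {name: agent.act(percepts[name])
                   for name, agent in agents.items()}
        # 4. Apply the cumulative consequences of all actions at once.
        env.update(actions)


class CounterEnv:
    """Toy environment: each 'inc' action increments a shared counter."""

    def __init__(self):
        self.counter = 0

    def percept_for(self, name):
        return self.counter  # fully observable toy world

    def update(self, actions):
        self.counter += sum(1 for a in actions.values() if a == "inc")


class IncAgent:
    def act(self, percept):
        return "inc"


env = CounterEnv()
run_simulation(env, {"a1": IncAgent(), "a2": IncAgent()}, steps=3)
print(env.counter)  # -> 6
```

Updating once per cycle with all actions together is what makes the consequences cumulative, as step 4 requires.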

Page 53: Introduction to Intelligent Agents

Environment Simulator Architecture

[Diagram: an environment simulation server holds the simulated environment model, a simulation visualization GUI, environment-update rules (model(t-1) → model(t); action(t) ∧ model(t-1) → model(t)) and percept-generation rules (model(t) → percept(t)); it exchanges percepts and actions over the network with agent clients 1 to N.]

Page 54: Introduction to Intelligent Agents

AI's Pluridisciplinarity

[Diagram: Artificial Intelligence at the center of: Philosophy; Mathematics (logic, probabilities and statistics, calculus, algebra); (Cognitive) Psychology; Economics; Sociology; Game Theory; Neurology; Zoology; Paleontology; Decision Theory; Linguistics; Operations Research; Information Theory; Computer Science (theory, distributed systems, software engineering, databases).]

Page 55: Introduction to Intelligent Agents

AI Roadmap

Generic tasks: clustering, classification, temporal projection, diagnosis, monitoring, repair, control, recommendation, configuration, discovery, design, allocation, timetabling, planning, simulation

Specific sub-fields: multi-agent communication, cooperation and negotiation; speech and natural language processing; computer perception and vision; robotic navigation and manipulation; games; intelligent tutoring systems

Generic sub-fields: heuristic search; automated reasoning and knowledge representation; machine learning and knowledge acquisition; pattern recognition

Computational metaphors: algorithmic exploration, logical derivation, probability estimation, connectionist activation, evolutionary selection

[Diagram: a problem is cast into AI metaphors and abstractions, which yield an algorithm.]

Page 56: Introduction to Intelligent Agents

Today's Diversity of AI Applications

Agriculture, natural resource management and the environment; architecture and design; art; artificial noses; astronomy and space exploration; assistive technologies; banking, finance and investing; bioinformatics; business and manufacturing; drama, fiction, poetry, storytelling and machine writing; earth and atmospheric sciences; engineering; filtering; fraud detection and prevention; hazards and disasters; information retrieval and extraction; knowledge management; law; law enforcement and public safety; libraries; marketing, customer relations and e-commerce; medicine; military; music; networks, including maintenance, security and intrusion detection; politics and foreign relations; public health and welfare; scientific discovery; social science; sports; telecommunications; transportation and shipping; video games, toys, robotic pets and entertainment

Page 57: Introduction to Intelligent Agents

Examples of AI Applied to Banking, Finance and Investment

Stock market (or currency) value prediction:
From sets of publicly released data about a company (or economy)
From past market fluctuations of the company's shares (or exchange rates)
From comparison with similar or competing stocks (or currencies)
From multi-agent trading simulations
Trading software agents:
Beat human traders in a commodity trading contest in 2001 (BBC)
33% of electronic trading is AI-assisted (Financial Post, 2004)
Loan approval
Fraud detection:
Mining suspicious patterns in transaction logs
Credit card fraud, insider trading, money laundering
Financial news filtering and summarization

Page 58: Introduction to Intelligent Agents

AI Pays!

AI industry gross revenue:
2002: US$11.9 billion
Annual growth rate: 12.2%
Projection for 2007: US$21.2 billion
www.aaai.org/AITopics/html/stats.html
Companies specialized in AI: http://dmoz.org/Computers/Artificial_Intelligence/Companies/
Corporations developing and using AI: Google, Amazon, IBM, Microsoft, Yahoo, ...
Corporations using AI: www.businessweek.com/bw50/content/mar2003/a3826072.htm
Wal-Mart, Abbot Labs, US Bancorp, LucasArts, Petrobrás, ...
Government agencies using AI: US National Security Agency

Page 59: Introduction to Intelligent Agents

When is a Machine Intelligent? What is Intelligence?

Who's smarter?
Your medical doctor or your cleaning lady?
Your lawyer or your two-year-old daughter?
Kasparov or Ronaldo?
What have 40 years of AI research discovered?
Common-sense intelligence is harder than expert intelligence
Embodied intelligence is harder than purely intellectual, abstract intelligence
Kid intelligence is harder than adult intelligence
Animal intelligence is harder than specifically human intelligence (after all, we share 99% of our genes with chimpanzees!)

[Images: chess, 1997: 2 x 1; soccer, 2050?: 2 x 1; the Turing Test.]

Page 60: Introduction to Intelligent Agents

www.robocup.org

A new benchmark task for AI
Annual competitions associated with conferences on AI, Robotics or Multi-Agent Systems

Page 61: Introduction to Intelligent Agents

Tomorrow's AI Applications

[Movie posters: The Matrix, Blade Runner, A.I.]