Artificial Intelligence: Chapter 2, Weeks 2 and 3
TRANSCRIPT
-
7/25/2019 Artificial Intelligence: Chapter 2 Week 2 and 3
ARTIFICIAL INTELLIGENCE
AGENTS
Dr. Zeeshan Bhatti
BSSW-PIV
Chapter 2
Institute of Information and Communication Technology, University of Sindh, Jamshoro
Last Time: Acting Humanly: The Full Turing Test
Alan Turing's 1950 article Computing Machinery and Intelligence discussed conditions for considering a machine to be intelligent.
Can machines think? Can machines behave intelligently?
The Turing test (The Imitation Game): an operational definition of intelligence.
The computer needs to possess: natural language processing, knowledge representation, automated reasoning, and machine learning.
Problems: 1) The Turing test is not reproducible, constructive, or amenable to mathematical analysis. 2) What about physical interaction with the interrogator and the environment?
Total Turing Test: requires physical interaction, and hence perception and actuation.
Last time: The Turing Test
http://www.ai.mit.edu/projects/infolab/
http://aimovie.warnerbros.com
This time: Outline
Intelligent Agents (IA)
Environment types
IA Behavior
IA Structure
IA Types
What is an (Intelligent) Agent?
An over-used, over-loaded, and misused term.
Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors or actuators to maximize progress towards its goals.
What is an (Intelligent) Agent?
A human agent has eyes, ears, and other organs for sensors, and hands, legs, a vocal tract, and so on for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.
Agents and environments
We use the term percept to refer to the agent's perceptual inputs at any given instant.
An agent's percept sequence is the complete history of everything the agent has ever perceived.
Mathematically speaking, we say that an agent's behaviour is described by the agent function that maps any given percept sequence to an action.
Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
Agents and environments
The agent function maps from percept histories to actions:
f : P* → A
The agent program runs on the physical architecture to produce f:
agent = architecture + program
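The distinction between agent function and agent program can be made concrete with a minimal, hypothetical table-driven agent program in Python. The table stands in for the agent function f : P* → A (restricted to the sequences it enumerates), while the surrounding code is the agent program; all names and table entries here are invented for illustration.

```python
# A minimal table-driven agent program. The table plays the role of
# the agent function f : P* -> A for the sequences it lists.

def make_table_driven_agent(table):
    percepts = []  # the percept sequence to date

    def program(percept):
        percepts.append(percept)
        # Look up the action for the entire percept history so far.
        return table.get(tuple(percepts), "NoOp")

    return program

# A tiny example table over vacuum-world percepts.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Note that such a table grows without bound as percept sequences lengthen, which is why practical agent programs compute f rather than store it.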
Example: Vacuum-cleaner world
This particular world has just two locations: squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing.
Percepts: location and contents, e.g., [A,Dirty]
Actions: Left, Right, Suck, NoOp
One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.
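Written as an agent program, this function is only a few lines of Python (a sketch; the percept and action names follow the slide):

```python
def vacuum_agent(percept):
    """Simple reflex vacuum agent. percept is (location, status),
    e.g. ("A", "Dirty"); returns one of Suck, Left, Right."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Otherwise move to the other square.
    return "Right" if location == "A" else "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("B", "Clean")))  # Left
```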
Looking at Figure 2.3, we see that various vacuum-world agents can be defined simply by filling in the right-hand column in various ways.
The obvious question, then, is this: what is the right way to fill out the table? In other words, what makes an agent good or bad, intelligent or stupid? We answer these questions in the next section.
What is an (Intelligent) Agent?
PAGE (Percepts, Actions, Goals, Environment)
Task-specific & specialized: well-defined goals and environment.
The notion of an agent is meant to be a tool for analyzing systems; it is not a different kind of hardware or a new programming language.
Intelligent Agents and Artificial Intelligence
Example: the human mind as a network of thousands or millions of agents working in parallel. To produce real artificial intelligence, this school holds, we should build computer systems that also contain many agents and systems for arbitrating among the agents' competing results.
Distributed decision-making and control.
Challenges:
Action selection: what next action to choose
Conflict resolution
[Figure: agency arising from many agents sharing sensors and effectors]
Agent Types
We can split agent research into two main strands:
Distributed Artificial Intelligence (DAI) / Multi-Agent Systems (MAS) (1980–1990)
A much broader notion of "agent" (1990s–present): interface, reactive, mobile, information agents
Rational Agents
[Figure: an agent coupled to its environment. Sensors take in percepts, a "?" box maps them to actions, and effectors act on the environment]
How to design this?
Remember: the Beobot example
A Windshield Wiper Agent
How do we design an agent that can wipe the windshields when needed?
Goals?
Percepts? Sensors?
Effectors?
Actions?
Environment?
A Windshield Wiper Agent (Cont'd)
Goals: Keep windshields clean & maintain visibility
Percepts: Raining, Dirty
Sensors: Camera (moisture sensor)
Effectors: Wipers (left, right, back)
Actions: Off, Slow, Medium, Fast
Environment: Inner city, freeways, highways, weather
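One minimal agent program consistent with this description might look as follows (a hypothetical sketch; the percept encoding and the rules mapping percepts to wiper speeds are invented for illustration):

```python
def wiper_agent(percept):
    """Map a (raining, dirty) percept pair to a wiper action."""
    raining, dirty = percept
    if raining and dirty:
        return "Fast"    # rain plus grime: wipe at full speed
    if raining:
        return "Medium"
    if dirty:
        return "Slow"    # a slow wipe to clear the windshield
    return "Off"

print(wiper_agent((True, True)))    # Fast
print(wiper_agent((False, False)))  # Off
```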
Towards Autonomous Vehicles
http://iLab.usc.edu
http://beobots.org
Interacting Agents: Exercise
Collision Avoidance Agent (CAA)
Goals: Avoid running into obstacles
Percepts ?
Sensors?
Effectors ?
Actions ?
Environment: Freeway
Lane Keeping Agent (LKA)
Goals: Stay in current lane
Percepts ?
Sensors?
Effectors ?
Actions ?
Environment: Freeway
Interacting Agents
Collision Avoidance Agent (CAA)
Goals: Avoid running into obstacles
Percepts: Obstacle distance, velocity, trajectory
Sensors: Vision, proximity sensing
Effectors: Steering Wheel, Accelerator, Brakes, Horn, Headlights
Actions: Steer, speed up, brake, blow horn, signal (headlights)
Environment: Freeway
Lane Keeping Agent (LKA)
Goals: Stay in current lane
Percepts: Lane center, lane boundaries
Sensors: Vision
Effectors: Steering Wheel, Accelerator, Brakes
Actions: Steer, speed up, brake
Environment: Freeway
Conflict Resolution by Action Selection Agents
Override: CAA overrides LKA
Arbitrate: if the obstacle is close then CAA, else LKA
Compromise: choose an action that satisfies both agents
Any combination of the above
Challenge: doing the right thing
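The arbitrate strategy can be sketched in a few lines of Python (the function, its parameters, and the distance threshold are all invented for illustration):

```python
def select_action(obstacle_distance, caa_action, lka_action,
                  close_threshold=10.0):
    """Arbitrate between the two agents: the collision-avoidance
    agent wins whenever an obstacle is close; otherwise the
    lane-keeping agent drives."""
    if obstacle_distance < close_threshold:
        return caa_action
    return lka_action

print(select_action(5.0, "Brake", "Steer"))   # Brake (obstacle close)
print(select_action(50.0, "Brake", "Steer"))  # Steer (lane keeping)
```

A compromise strategy would instead combine the two proposals, e.g. blending steering angles, rather than picking a single winner.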
GOOD BEHAVIOR: THE CONCEPT OF RATIONALITY
Rational agents
Rational Agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
What is a Rational Agent?
A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly.
Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
We answer this by considering the consequences of the agent's behaviour.
Rational agents
Rationality is distinct from omniscience (all-knowing with infinite knowledge).
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).
Rational agents: Performance Measure
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for success of an agent's behavior in any given sequence of environment states.
E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
Rationality?
What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent?
That depends! First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators the agent has. Let us assume the following:
The performance measure awards one point for each clean square at each time step, over a lifetime of 1000 time steps.
The geography of the environment is known a priori (Figure 2.2), but the dirt distribution and the initial location of the agent are not.
Clean squares stay clean and sucking cleans the current square. The Left and Right actions move the agent left and right except when this would take the agent outside the environment, in which case the agent remains where it is.
The only available actions are Left, Right, and Suck.
The agent correctly perceives its location and whether that location contains dirt.
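Under these assumptions, the performance measure can be computed by simulating the two-square world; a minimal sketch (the world model follows the assumptions above; function and variable names are illustrative):

```python
def vacuum_agent(percept):
    """The simple reflex agent tabulated in Figure 2.3."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def simulate(agent, dirt, location="A", steps=1000):
    """Run the two-square vacuum world for `steps` time steps,
    awarding one point per clean square per time step."""
    score = 0
    for _ in range(steps):
        status = "Dirty" if dirt[location] else "Clean"
        action = agent((location, status))
        if action == "Suck":
            dirt[location] = False
        elif action == "Right" and location == "A":
            location = "B"
        elif action == "Left" and location == "B":
            location = "A"
        score += sum(1 for is_dirty in dirt.values() if not is_dirty)
    return score

# Worst case: both squares start dirty, agent starts in A.
print(simulate(vacuum_agent, {"A": True, "B": True}))  # 1998 of a possible 2000
```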
We claim that under these circumstances the agent is indeed rational; its expected performance is at least as high as any other agent's.
One can see easily that the same agent would be irrational under different circumstances.
For example, once all the dirt is cleaned up, the agent will oscillate needlessly back and forth; if the performance measure includes a penalty of one point for each movement left or right, the agent will fare poorly. A better agent for this case would do nothing once it is sure that all the squares are clean.
If clean squares can become dirty again, the agent should occasionally check and re-clean them if needed.
If the geography of the environment is unknown, the agent will need to explore it rather than stick to squares A and B.
Exercise: Homework
Task: let us examine the rationality of various vacuum-cleaner agent functions.
a. Show that the simple vacuum-cleaner agent function described in Figure 2.3 is indeed rational under the assumptions listed on page 38.
b. Describe a rational agent function for the case in which each movement costs one point. Does the corresponding agent program require internal state?
c. Discuss possible agent designs for the cases in which clean squares can become dirty and the geography of the environment is unknown. Does it make sense for the agent to learn from its experience in these cases? If so, what should it learn? If not, why not?
The Right Thing = The Rational Action
Rational Action: the action that maximizes the expected value of the performance measure given the percept sequence to date.
Rational = Best ?
Rational = Optimal ?
Rational = Omniscience ?
Rational = Clairvoyant ?
Rational = Successful ?
(Clairvoyant = Intuitive, Psychic, Telepathic)
The Right Thing = The Rational Action
Rational = Best? Yes, to the best of its knowledge
Rational = Optimal? Yes, to the best of its abilities (incl. its constraints)
Rational ≠ Omniscience
Rational ≠ Clairvoyant
Rational ≠ Successful
Behavior and performance of IAs
Perception (sequence) to action mapping: f : P* → A
Ideal mapping: specifies which actions an agent ought to take at any point in time.
Description: look-up table, closed form, etc.
Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.)
(Degree of) Autonomy: to what extent is the agent able to make decisions and take actions on its own?
Look-up table
[Figure: an agent whose sensor detects an obstacle ahead]
Distance | Action
10       | No action
5        | Turn left 30 degrees
2        | Stop
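Interpreting the rows as distance thresholds, the table can be implemented as a small Python function (the threshold interpretation and the function name are assumptions for illustration, not from the slide):

```python
def lookup_action(distance):
    """Look-up-table agent: map sensed obstacle distance to an action."""
    if distance <= 2:
        return "Stop"
    if distance <= 5:
        return "Turn left 30 degrees"
    return "No action"

print(lookup_action(10))  # No action
print(lookup_action(5))   # Turn left 30 degrees
print(lookup_action(2))   # Stop
```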
Closed form
Output (degree of rotation) = F(distance)
E.g., F(d) = 10/d (distance cannot be less than 1/10)
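The closed-form mapping can be written directly (a sketch; clamping the distance to 1/10 follows the note above and keeps the output bounded):

```python
def rotation(distance):
    """Closed-form agent: degrees of rotation F(d) = 10/d,
    with the distance clamped to at least 1/10."""
    d = max(distance, 0.1)
    return 10.0 / d

print(rotation(5.0))   # 2.0 degrees
print(rotation(0.05))  # clamped to F(0.1), about 100 degrees
```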
Thank you
Q & A
Referred Book: Artificial Intelligence: A Modern Approach, 3rd Edition, by Stuart Russell and Peter Norvig, Prentice-Hall, 2003
For Course Slides and Handouts:
Web page: https://sites.google.com/site/drzeeshanacademy/
Blog: http://zeeshanacademy.blogspot.com/
Facebook: https://www.facebook.com/drzeeshanacademy