Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Page 1: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Introduction to AI and Intelligent Agents

Foundations of Artificial Intelligence

Page 2: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Definitions of AI

Building systems that think like humans
  "The exciting new effort to make computers think … machines with minds, in the full and literal sense" -- Haugeland, 1985
  "The automation of activities that we associate with human thinking, … such as decision-making, problem solving, learning, …" -- Bellman, 1978

Building systems that act like humans
  "The art of creating machines that perform functions that require intelligence when performed by people" -- Kurzweil, 1990
  "The study of how to make computers do things at which, at the moment, people are better" -- Rich and Knight, 1991

Page 3: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Definitions of AI

Building systems that think rationally
  "The study of mental faculties through the use of computational models" -- Charniak and McDermott, 1985
  "The study of the computations that make it possible to perceive, reason, and act" -- Winston, 1992

Building systems that act rationally
  "A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" -- Schalkoff, 1990
  "The branch of computer science that is concerned with the automation of intelligent behavior" -- Luger and Stubblefield, 1993

Page 4: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Thinking and Acting Humanly

Thinking humanly: cognitive modeling
  Develop a precise theory of mind, through experimentation and introspection, then write a computer program that implements it.
  Example: GPS, the General Problem Solver (Newell and Simon, 1961), which tried to model the human process of problem solving in general.

Acting humanly
  "If it looks, walks, and quacks like a duck, then it is a duck."
  The Turing Test: the interrogator communicates by typing at a terminal with TWO other agents, and can say and ask whatever s/he likes, in natural language. If the interrogator cannot decide which of the two agents is a human and which is a computer, then the computer has achieved AI.
  This is an OPERATIONAL definition of intelligence, i.e., one that gives an algorithm for testing objectively whether the definition is satisfied.

Page 5: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Thinking and Acting Rationally

Thinking rationally
  Capture "correct" reasoning processes. A loose definition of rational thinking: an irrefutable reasoning process.
  How do we do this?
    Develop a formal model of reasoning (formal logic) that "always" leads to the "right" answer, then implement this model.
  How do we know when we've got it right?
    When we can prove that the results of the programmed reasoning are correct (soundness and completeness of first-order logic).

Acting rationally
  Act so that desired goals are achieved.
  The rational agent approach (this is what we'll focus on in this course): figure out how to make correct decisions, which sometimes means thinking rationally and other times means having rational reflexes.
  Correct inference versus rationality; reasoning versus acting; limited rationality.

Page 6: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Turing's Goal

Alan Turing, Computing Machinery and Intelligence, 1950:

Can machines think? How could we tell?

“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.” — Alan Turing, Computing machinery and intelligence, 1950

Page 7: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Turing’s “Imitation Game”

[Diagram: the interrogator communicates with two unseen respondents, B (a person) and A (a machine), and must determine which is which.]

Page 8: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Necessary versus Sufficient Conditions

Is ability to pass a Turing Test a necessary condition of intelligence?

“May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.” — Turing, 1950

Is ability to pass a Turing Test a sufficient condition of intelligence?

Page 9: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

The Turing Syllogism

  If an agent passes a Turing Test, then it produces a sensible sequence of verbal responses to a sequence of verbal stimuli.
  If an agent produces a sensible sequence of verbal responses to a sequence of verbal stimuli, then it is intelligent.
  Therefore, if an agent passes a Turing Test, then it is intelligent.

The Capacity Conception: If an agent has the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be, then it is intelligent.

Page 10: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Memorizing all possible answers? (Bertha's Machine)

Page 11: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Exponential Growth

Assume that each time the judge asks a question, she picks between two questions based on what has happened so far.

  Questions asked    Possible responses
  1                  2
  2                  4
  3                  8
  4                  16
  5                  32
  6                  64
  n                  2^n

Page 12: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Storage versus Length

[Figure: the storage a lookup-table machine needs grows exponentially with the length of the conversation.]

Page 13: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Time required as a function of problem size n (one algorithm step = 1 microsecond):

  Complexity   n=10          n=20          n=30          n=40            n=50                n=60
  n            .00001 sec    .00002 sec    .00003 sec    .00004 sec      .00005 sec          .00006 sec
  2^n          .001 sec      1.0 sec       17.9 min      12.7 days       35.7 years          366 centuries
  3^n          .059 sec      58 min        6.5 years     3855 centuries  2x10^8 centuries    1.3x10^13 centuries

(Garey & Johnson, 1979)

Polynomial vs. exponential time complexity.
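If it helps to see where these entries come from, the small sketch below (an illustration, not part of the slides) converts f(n) algorithm steps at one microsecond per step into human-scale units:

# Reproduce a Garey & Johnson-style table: time to run f(n) algorithm
# steps at one step per microsecond, reported in human-scale units.
def human_time(steps: float) -> str:
    seconds = steps * 1e-6                     # one step = 1 microsecond
    minute, hour, day = 60, 3600, 86400
    year = 365.25 * day
    century = 100 * year
    if seconds < minute:
        return f"{seconds:.5f} seconds"
    if seconds < hour:
        return f"{seconds / minute:.1f} minutes"
    if seconds < 30 * day:
        return f"{seconds / day:.1f} days"
    if seconds < century:
        return f"{seconds / year:.1f} years"
    return f"{seconds / century:.3g} centuries"

for n in (10, 20, 30, 40, 50, 60):
    print(n, human_time(n), human_time(2 ** n), human_time(3 ** n))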

Page 14: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

The Compact Conception

If an agent has the capacity to produce a sensible sequence of verbal responses to an arbitrary sequence of verbal stimuli without requiring exponential storage, then it is intelligent.

Page 15: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Size of the Universe

[Figure: spacetime sketch from the Big Bang to "here, now"; the extent of the observable universe is taken as 15x10^9 light-years.]

Page 16: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence

Storage Capacity of the Universe

Volume: (15x10^9 light-years)^3 = (15x10^9 x 10^16 meters)^3

Density: 1 bit per (10^-35 meters)^3

Total storage capacity: 10^184 bits < 10^200 bits < 2^670 bits

Critical Turing Test length: 670 bits < 670 characters < 140 words < 1 minute

The universe is not big enough to hold a Bertha machine.
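As a rough check on these figures, using the slide's round numbers (10^16 meters per light-year, one bit per (10^-35 m)^3 cell) rather than precise physical constants, a short computation might look like this:

# Back-of-the-envelope check of the slide's numbers (round figures only).
import math

light_year_m = 1e16                        # slide's approximation of a light-year in meters
radius_m = 15e9 * light_year_m             # 15 * 10^9 light-years
volume_m3 = radius_m ** 3                  # cubed, as on the slide
cell_m3 = (1e-35) ** 3                     # 1 bit per (10^-35 m)^3
bits = volume_m3 / cell_m3                 # total storage capacity in bits

print(f"capacity ~ 10^{math.log10(bits):.0f} bits")    # about 10^184, well under 10^200
print(f"log2(10^200) ~ {200 / math.log10(2):.0f}")     # about 664, so 2^670 > 10^200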

Page 17: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Sub-fields of AI

Problem solving
  Lots of early success here: solving puzzles, playing chess, mathematics (integration).
  Uses techniques like search and problem reduction.

Logical reasoning
  Prove things by manipulating a database of facts.
  Theorem proving.

Automatic programming
  Writing computer programs given some sort of description.
  Some success with semi-automated methods, some error detection systems, automatic program verification.

Page 18: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Sub-fields of AI

Language understanding and semantic modeling
  One of the earliest problems; some success within limited domains.
  How can we "understand" written/spoken language?
  Includes answering questions, translating between languages, learning from written text, and speech recognition.
  Some aspects of language understanding:
    Associating spoken words with the "actual" word
    Understanding language forms, such as prefixes/suffixes/roots
    Syntax: how to form grammatically correct sentences
    Semantics: understanding the meaning of words, phrases, sentences
    Context
    Conversation

Page 19: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Sub-fields of AI

Pattern recognition
  Computer-aided identification of objects/shapes/sounds.
  Needed for speech and picture understanding.
  Requires signal acquisition, feature extraction, ...
  Data mining and information retrieval.

Expert systems and knowledge-based systems
  Designers often called knowledge engineers.
  Translate the things that an expert knows, and the rules that an expert uses to make decisions, into a computer program.
  Problems include:
    Knowledge acquisition (how do we get the information?)
    Explanation (of the answers)
    Knowledge models (what do we do with the information?)
    Handling uncertainty

Page 20: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Some Sub-fields of AI

Planning, robotics and vision
  Planning how to perform actions.
  Manipulating devices.
  Recognizing objects in pictures.

Machine learning and neural networks
  Can we "remember" solutions, rather than recalculating them?
  Can we learn additional facts from present data?
  Can we model the physical aspects of the brain?
  Classification and clustering.

Non-monotonic reasoning
  Truth maintenance systems.

Page 21: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Fundamental Techniques of AI

Knowledge representation
  Intelligence/intelligent behavior requires knowledge, which is voluminous, hard to characterize, and constantly changing.
  How can one capture formally (i.e., computerize) everything needed for intelligent behavior?
  Some questions:
    How do you store all of that data in a useful way? Can you get rid of some?
    How can you store decision-making steps?
  Characteristics of good data representation techniques:
    Captures the general situation rather than being overly specific
    Understandable by the people who provide it
    Easily modified to handle errors, changes in data, and changes in perception
    Of general use

Page 22: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Fundamental Techniques of AI

Search
  How can we model the problem search space?
  How can we move between steps in a decision-making process?
  How can you find the information you need in a large data set?
  Given a choice of possible decision sequences, how do you pick a good one?
    Heuristic functions
  Given a goal, how do you figure out what to do (planning)?
  Base-level versus meta-level reasoning:
    How can we reason about what step to take next (in reaching the goal)?
    How much do we reason before acting?

Page 23: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


AI in Everyday Life?

AI techniques are used in many common applications:
  Intelligent user interfaces
  Search engines
  Spell/grammar checkers
  Context-sensitive help systems
  Medical diagnosis systems
  Regulating/controlling hardware devices and processes (e.g., in automobiles)
  Voice/image recognition (more generally, pattern recognition)
  Scheduling systems (airlines, hotels, manufacturing)
  Error detection/correction in electronic communication
  Program verification / compiler and programming language design
  Web search engines / Web spiders
  Web personalization and recommender systems (collaborative/content filtering)
  Personal agents
  Customer relationship management
  Credit card verification in e-commerce / fraud detection
  Data mining and knowledge discovery in databases
  Computer games

Page 24: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


AI Spin-Offs

Many technologies widely used today were the direct or indirect results of research in AI:
  The mouse
  Time-sharing
  Graphical user interfaces
  Object-oriented programming
  Computer games
  Hypertext
  Information retrieval
  The World Wide Web
  Symbolic mathematical systems (e.g., Mathematica, Maple, etc.)
  Very high-level programming languages
  Web agents
  Data mining

Page 25: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


What is an Intelligent Agent?

An agent is anything that can
  perceive its environment through sensors, and
  act upon that environment through actuators (or effectors).

Goal: design rational agents that do a "good job" of acting in their environments.
  Success is determined based on some objective performance measure.

[Diagram: an agent interacting with its environment through sensors (percepts in) and actuators (actions out).]

Page 26: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Example: Vacuum Cleaner Agent

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

Page 27: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


What is an Intelligent Agent?

Rational agents
  An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.

Performance measure: an objective criterion for success of an agent's behavior.
  E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.

Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Omniscience, learning, autonomy
  Rationality is distinct from omniscience (all-knowing with infinite knowledge): choose the action that maximizes the expected value of the performance measure given the percepts to date.
  Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).
  An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

Page 28: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


What is an Intelligent Agent?

Rationality depends on
  the performance measure that defines the degree of success
  the percept sequence: everything the agent has perceived so far
  what the agent knows about its environment
  the actions that the agent can perform

Agent function (percepts ==> actions)
  Maps from percept histories to actions: f: P* → A
  The agent program runs on the physical architecture to produce the function f.
  agent = architecture + program

  Action := Function(Percept Sequence)
  If (Percept Sequence) then do Action

Example: a simple agent function for the vacuum world

  If (current square is dirty) then Suck
  Else move to the adjacent square
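As a concrete, minimal sketch of this agent function, here is one way it might look in Python, assuming the two-square A/B world and percepts of the form (location, status); only the last percept actually matters for this simple rule:

# A sketch of the vacuum-world agent function above: a mapping f: P* -> A
# from percept histories to actions (two-square world assumed).
def vacuum_agent_function(percept_sequence):
    location, status = percept_sequence[-1]   # only the latest percept is needed here
    if status == "Dirty":
        return "Suck"
    # otherwise move to the adjacent square
    return "Right" if location == "A" else "Left"

print(vacuum_agent_function([("A", "Dirty")]))   # Suck
print(vacuum_agent_function([("A", "Clean")]))   # Right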

Page 29: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


What is an Intelligent Agent?

Limited rationality
  Optimal (i.e., best possible) rationality is NOT perfect success: limited sensors, actuators, and computing power may make perfect success impossible.
  Theory of NP-completeness: some problems are likely impossible to solve quickly on ANY computer.
  Both natural and artificial intelligence are always limited.
  Degree of rationality: the degree to which the agent's internal "thinking" maximizes its performance measure, given
    the available sensors
    the available actuators
    the available computing power
    the available built-in knowledge

Page 30: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


PEAS Analysis

To design a rational agent, we must specify the task environment.
PEAS analysis: specify the Performance measure, Environment, Actuators, and Sensors.

Example: consider the task of designing an automated taxi driver
  Performance measure: safe, fast, legal, comfortable trip, maximize profits
  Environment: roads, other traffic, pedestrians, customers
  Actuators: steering wheel, accelerator, brake, signal, horn
  Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
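One possible (purely illustrative) way to record a PEAS specification as a data structure, shown with the taxi example; the class and field names below are hypothetical, not from the text:

# A hypothetical container for a PEAS task-environment specification.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
)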

Page 31: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


PEAS Analysis – More Examples

Agent: medical diagnosis system
  Performance measure: healthy patient, minimize costs, lawsuits
  Environment: patient, hospital, staff
  Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
  Sensors: keyboard (entry of symptoms, findings, patient's answers)

Agent: part-picking robot
  Performance measure: percentage of parts in correct bins
  Environment: conveyor belt with parts, bins
  Actuators: jointed arm and hand
  Sensors: camera, joint angle sensors

Page 32: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


PEAS Analysis – More Examples

Agent: Internet Shopping Agent

Performance measure??
Environment??
Actuators??
Sensors??

Page 33: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Environment Types

Fully observable (vs. partially observable):
  An agent's sensors give it access to the complete state of the environment at each point in time.

Deterministic (vs. stochastic):
  The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)

Episodic (vs. sequential):
  The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

Page 34: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Environment Types (cont.)

Static (vs. dynamic):
  The environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

Discrete (vs. continuous):
  A limited number of distinct, clearly defined percepts and actions.

Single agent (vs. multi-agent):
  An agent operating by itself in an environment.

Page 35: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Environment Types (cont.)

The environment type largely determines the agent design.

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent

Page 36: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Structure of an Intelligent Agent

All agents have the same basic structure:
  accept percepts from the environment
  generate actions

A skeleton agent:

  function Skeleton-Agent(percept) returns action
    static: memory, the agent's memory of the world
    memory ← Update-Memory(memory, percept)
    action ← Choose-Best-Action(memory)
    memory ← Update-Memory(memory, action)
    return action

Observations:
  The agent may or may not build the percept sequence in memory (depends on the domain).
  The performance measure is not part of the agent; it is applied externally to judge the success of the agent.
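A minimal Python rendering of this skeleton, assuming placeholder Update-Memory and Choose-Best-Action routines that a concrete domain would have to fill in:

# Sketch of the Skeleton-Agent pseudocode above; update_memory and
# choose_best_action are placeholders for domain-specific behavior.
class SkeletonAgent:
    def __init__(self):
        self.memory = []                      # the agent's memory of the world

    def __call__(self, percept):
        self.memory = self.update_memory(self.memory, percept)
        action = self.choose_best_action(self.memory)
        self.memory = self.update_memory(self.memory, action)
        return action

    def update_memory(self, memory, item):
        return memory + [item]                # trivially record everything

    def choose_best_action(self, memory):
        return "NoOp"                         # domain-specific in a real agent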

Page 37: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Looking Up the Answer?

A template for a table-driven agent:

  function Table-Driven-Agent(percept) returns action
    static: percepts, a sequence, initially empty
            table, a table indexed by percept sequences, initially fully specified
    append percept to the end of percepts
    action ← LookUp(percepts, table)
    return action

Why can't we just look up the answers? The disadvantages of this architecture:
  infeasibility (excessive size)
  lack of adaptiveness
How big would the table have to be?
Could the agent ever learn from its mistakes?
Where should the table come from in the first place?
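A sketch of the table-driven template; the tiny table here is an invented toy for the two-square vacuum world, keyed by the entire percept sequence seen so far, which is exactly why this approach does not scale:

# Sketch of a table-driven agent with a made-up, partial lookup table.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []                               # the percept sequence, initially empty

def table_driven_agent(percept):
    percepts.append(percept)                # append percept to the end of percepts
    return table.get(tuple(percepts), "NoOp")   # LookUp(percepts, table)

print(table_driven_agent(("A", "Clean")))   # Right
print(table_driven_agent(("B", "Dirty")))   # Suck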

Page 38: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Agent Types

Simple reflex agents
  are based on condition-action rules and implemented with an appropriate production system. They are stateless devices which do not have memory of past world states.

Reflex agents with memory (model-based)
  have internal state which is used to keep track of past states of the world.

Agents with goals
  are agents which, in addition to state information, have a kind of goal information which describes desirable situations. Agents of this kind take future events into consideration.

Utility-based agents
  base their decisions on classic axiomatic utility theory in order to act rationally.

Note: all of these can be turned into "learning" agents.

Page 39: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


A Simple Reflex Agent

  function Simple-Reflex-Agent(percept) returns action
    static: rules, a set of condition-action rules
    state ← Interpret-Input(percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action[rule]
    return action

We can summarize part of the table by formulating commonly occurring patterns as condition-action rules, for example:

  if car-in-front-brakes then initiate-braking

The agent works by finding a rule whose condition matches the current situation (rule-based systems). But this only works if the current percept is sufficient for making the correct decision.
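A small sketch of such a rule-driven reflex agent; the rules below (including the car-in-front-brakes example) are illustrative only:

# Sketch of a simple reflex agent driven by condition-action rules.
def interpret_input(percept):
    return percept                            # here the percept already is the state

rules = [
    (lambda s: s.get("car_in_front_brakes"), "initiate-braking"),
    (lambda s: s.get("light") == "red", "stop"),
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in rules:           # Rule-Match
        if condition(state):
            return action                     # Rule-Action[rule]
    return "NoOp"

print(simple_reflex_agent({"car_in_front_brakes": True}))   # initiate-braking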

Page 40: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Example: Simple Reflex Vacuum Agent
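A sketch of what this reflex vacuum agent might look like in code, acting only on the current percept (location, status) in the two-square A/B world:

# Simple reflex vacuum agent: no memory, only the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Clean")))   # Right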

Page 41: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Agents that Keep Track of the World

  function Reflex-Agent-With-State(percept) returns action
    static: rules, a set of condition-action rules
            state, a description of the current world
    state ← Update-State(state, percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action[rule]
    state ← Update-State(state, action)
    return action

Updating the internal state requires two kinds of encoded knowledge:
  knowledge about how the world changes (independent of the agent's actions)
  knowledge about how the agent's actions affect the world

But knowledge of the internal state is not always enough:
  How do we choose among alternative decision paths (e.g., where should the car go at an intersection)?
  This requires knowledge of the goal to be achieved.
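A sketch of such a model-based (reflex-with-state) agent; update_state and the rules are placeholders standing in for the two kinds of knowledge just mentioned:

# Sketch of a reflex agent with internal state (model-based reflex agent).
class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.state = {}                 # description of the current world
        self.rules = rules              # condition-action rules
        self.update_state = update_state  # encodes how the world evolves / how actions affect it

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.state = self.update_state(self.state, action)
                return action
        return "NoOp"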

Page 42: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Agents with Explicit Goals

Reasoning about actions
  Reflex agents act only on pre-computed knowledge (rules).
  Goal-based (planning) agents act by reasoning about which actions achieve the goal.
  Less efficient, but more adaptive and flexible.

Page 43: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Agents with Explicit Goals

Knowing the current state is not always enough.
  State allows an agent to keep track of unseen parts of the world, but the agent must update its state based on knowledge of changes in the world and of the effects of its own actions.

Goal = description of a desired situation

Examples:
  The decision to change lanes depends on a goal to go somewhere (and other factors).
  The decision to put an item in the shopping basket depends on a shopping list, a map of the store, knowledge of the menu.

Notes:
  Search (Russell & Norvig, Chapters 3-5) and Planning (Chapters 11-13) are concerned with finding sequences of actions to satisfy a goal; a reflexive agent is concerned with one action at a time.
  Classical planning: finding a sequence of actions that achieves a goal.
  Contrast with condition-action rules: planning involves consideration of the future, "what will happen if I do ..." (a fundamental difference).

Page 44: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


A Complete Utility-Based Agent

Utility function
  a mapping of states onto real numbers
  allows rational decisions in two kinds of situations:
    evaluation of the tradeoffs among conflicting goals
    evaluation of competing goals

Page 45: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Utility-Based Agents (cont.)

A preferred world state has higher utility for the agent (utility = the quality of being useful).

Examples:
  quicker, safer, more reliable ways to get where you are going
  price-comparison shopping
  bidding on items in an auction
  evaluating bids in an auction

Utility function: state ==> U(state) = measure of happiness

Search (goal-based) vs. games (utilities).
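A minimal sketch of utility-based action selection: score the state each action would lead to and pick the best. The utility function, the outcome model and the brand names below are invented purely for illustration:

# Sketch of utility-based action selection with a made-up utility and model.
def utility(state):
    # U(state): here, prefer cheap and fast outcomes
    return -(2 * state["price"] + state["time"])

def result_of(state, action):
    # crude, invented model of how each action changes the state
    effects = {"buy_brand_A": {"price": 4, "time": 5},
               "buy_brand_B": {"price": 3, "time": 9}}
    return effects[action]

def utility_based_agent(state, actions):
    return max(actions, key=lambda a: utility(result_of(state, a)))

print(utility_based_agent({}, ["buy_brand_A", "buy_brand_B"]))   # buy_brand_A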

Page 46: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Shopping Agent Example

Navigating: move around the store; avoid obstacles.
  Reflex agent: store map precompiled.
  Goal-based agent: create an internal map, reason explicitly about it, use signs, and adapt to changes (e.g., specials at the ends of aisles).

Gathering: find the groceries it wants and put them into the cart; needs to induce objects from percepts.
  Reflex agent: wander and grab items that look good.
  Goal-based agent: shopping list.

Menu planning: generate the shopping list; modify the list if the store is out of some item.
  Goal-based agent required: what happens when a needed item is not there? Achieve the goal some other way, e.g., no milk cartons: get canned milk or powdered milk.

Choosing among alternative brands:
  Utility-based agent: trade off quality for price.

Page 47: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


General Architecture for Goal-Based Agents

Simple agents do not have access to their own performance measure.
  In this case the designer will "hard-wire" a goal for the agent, i.e., the designer will choose the goal and build it into the agent.
Similarly, unintelligent agents cannot formulate their own problem; this formulation must be built in as well.

  Input percept
  state ← Update-State(state, percept)
  goal ← Formulate-Goal(state, perf-measure)
  search-space ← Formulate-Problem(state, goal)
  plan ← Search(search-space, goal)
  while (plan not empty) do
    action ← Recommendation(plan, state)
    plan ← Remainder(plan, state)
    output action
  end

The while loop above is the "execution phase" of this agent's behavior.
Note that this architecture assumes that the execution phase does not require monitoring of the environment.
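A Python sketch of the same architecture; Formulate-Goal, Formulate-Problem and Search are passed in as placeholders, and Search is assumed to return the plan as a list of actions:

# Sketch of the goal-based architecture above with placeholder components.
def goal_based_agent(percept, state, perf_measure,
                     update_state, formulate_goal, formulate_problem, search):
    state = update_state(state, percept)
    goal = formulate_goal(state, perf_measure)
    search_space = formulate_problem(state, goal)
    plan = search(search_space, goal)          # a list of actions
    executed = []
    while plan:                                # "execution phase": no environment monitoring
        action, plan = plan[0], plan[1:]
        executed.append(action)                # output action
    return executed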

Page 48: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Learning Agents

Four main components:
  Performance element: the agent function itself.
  Learning element: responsible for making improvements by observing performance.
  Critic: gives feedback to the learning element by measuring the agent's performance.
  Problem generator: suggests other possible courses of action (exploration).
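One way these four components might be wired together, sketched with placeholder callables; the exact ordering of learning and acting here is a design choice, not prescribed by the slide:

# Sketch of a learning agent assembled from four placeholder components.
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # the agent function itself
        self.learning_element = learning_element          # improves the performance element
        self.critic = critic                              # measures how well the agent is doing
        self.problem_generator = problem_generator        # suggests exploratory actions

    def __call__(self, percept):
        feedback = self.critic(percept)                    # judge recent performance
        self.performance_element = self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator(percept)      # possibly suggest something new to try
        return exploratory if exploratory is not None else self.performance_element(percept)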

Page 49: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Search and Knowledge Representation

Goal-based and utility-based agents require a representation of:
  states within the environment
  actions and their effects (the effect of an action is a transition from the current state to another state)
  goals
  utilities

Problems can often be formulated as search problems:
  to satisfy a goal, the agent must find a sequence of actions (a path in the state-space graph) from the starting state to a goal state.

To do this efficiently, agents must have the ability to reason with their knowledge about the world and the problem domain:
  which path to follow (which action to choose) next
  how to determine whether a goal state has been reached, OR how to decide whether a satisfactory state has been reached.
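To make the search formulation concrete, a minimal sketch: breadth-first search over an invented toy state-space graph, returning the plan as a sequence of actions (the actual search algorithms are covered later in the course):

# Formulating a problem as state-space search: find a path (action sequence)
# from the start state to a goal state in a tiny, invented graph.
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """successors(state) -> iterable of (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path                       # the plan: a sequence of actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

graph = {"A": [("go-B", "B"), ("go-C", "C")], "B": [("go-D", "D")], "C": [], "D": []}
print(breadth_first_search("A", lambda s: s == "D", lambda s: graph[s]))  # ['go-B', 'go-D']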

Page 50: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Intelligent Agent Summary

An agent perceives and acts in an environment. It has an architecture and is implemented by a program.
An ideal agent always chooses the action which maximizes its expected performance, given the percept sequence received so far.
An autonomous agent uses its own experience rather than built-in knowledge of the environment by the designer.
An agent program maps from a percept to an action and updates its internal state.
  Reflex agents respond immediately to percepts.
  Goal-based agents act in order to achieve their goal(s).
  Utility-based agents maximize their own utility function.

Page 51: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Exercise

Do Exercise 1.3 on page 30. You can find out about the Loebner Prize at:

http://www.loebner.net/Prizef/loebner-prize.html

Also (for discussion) look at exercise 1.2 and read the material on the Turing Test at:

http://plato.stanford.edu/entries/turing-test/

Read the article by Jennings and Wooldridge ("Applications of Intelligent Agents"). Compare and contrast the definitions of agents and intelligent agents as given by Russell and Norvig (in the textbook) and in the article.

Page 52: Introduction to AI and Intelligent Agents Foundations of Artificial Intelligence


Exercise

News Filtering Internet Agent
  uses a static user profile (e.g., a set of keywords specified by the user)
  on a regular basis, searches a specified news site (e.g., Reuters or AP) for news stories that match the user profile
  can search through the site by following links from page to page
  presents a set of links to the matching stories that have not been read before (matching is based on the number of words from the profile occurring in the news story)

(1) Give a detailed PEAS description for the news filtering agent.
(2) Characterize the environment type (as being observable, deterministic, episodic, static, etc.).