


Course: Introduction to Artificial Intelligence

Second Semester 2015/2016

Eng. Mohamed B. Abubaker

Exercises for Chapter 2 and 3

1. For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate.

a. An agent that senses only partial information about the state cannot be perfectly rational.

False. Perfect rationality means making the best possible decisions given the sensor information received; it does not require access to the full state.

b. There exist task environments in which no pure reflex agent can behave rationally.

True. A pure reflex agent ignores previous percepts, so it cannot obtain an optimal state estimate in a partially observable environment.

c. There exists a task environment in which every agent is rational.

True. For example, in an environment with a single state where every action has the same outcome, it doesn't matter which action is taken.

d. The input to an agent program is the same as the input to the agent function.

False. The agent function takes as input the entire percept sequence up to that point (the percept history), whereas the agent program takes only the current percept.

e. Every agent function is implementable by some program/machine combination.

False. An agent function is an abstract mathematical description, while the agent program is a concrete implementation running within some physical system. Because the agent function is just an abstract description, there exist cases in which no program/machine combination can realize it; for instance, any program computing it may run out of memory on a physical machine.

f. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.

True. This is a special case of (c); if it doesn’t matter which action you take, selecting randomly is rational.

g. It is possible for a given agent to be perfectly rational in two distinct task environments.

True. For example, we can arbitrarily modify the parts of the environment that are unreachable by any optimal policy, as long as they stay unreachable.

h. Every agent is rational in an unobservable environment.

False. Consider a vacuum agent whose job is to clean: if the agent only moves around and never cleans, it is not rational, even though the environment is unobservable.

i. A perfectly rational game-playing agent never loses.

False. If two perfectly rational game-playing agents play against each other, at most one of them can win, so at least one perfectly rational agent fails to win.


2. For each of the following activities, give (1) a PEAS description of the task environment and (2) the characteristics (properties) of the task environment.

Playing soccer.

Shopping for used AI books on the Internet.

Playing a tennis match.

Practicing tennis against a wall.

Mathematician’s theorem-proving assistant

Autonomous Mars rover

PEAS description of the task environment

Playing soccer.
Performance measure: goal-scoring ratio and the team's win/loss ratio.
Environment: soccer field, ball, own team, opposing team.
Actuators: devices (e.g., legs) for locomotion and kicking.
Sensors: camera, accelerometers, orientation sensors.

Shopping for used AI books on the Internet.
Performance measure: obtaining the requested/interesting books while minimizing cost.
Environment: Internet websites.
Actuators: keyboard entry, display to the user.
Sensors: browser used to find and read the Web pages.

Playing a tennis match.
Performance measure: point-scoring ratio and win/loss ratio.
Environment: tennis court, ball, players.
Actuators: devices (e.g., legs, arms) for running and hitting the ball.
Sensors: camera, accelerometers, orientation sensors.

Practicing tennis against a wall.
Performance measure: improved performance in real matches against other players.
Environment: wall, ball.
Actuators: devices (e.g., legs, arms) for running and hitting the ball.
Sensors: camera, accelerometers, orientation sensors.

Mathematician's theorem-proving assistant.
Performance measure: sound mathematical knowledge; proving theorems accurately and in minimal steps/time.
Environment: Internet, library.
Actuators: display.
Sensors: keyboard entry.

Autonomous Mars rover.
Performance measure: terrain explored and reported, samples gathered and analyzed.
Environment: launch vehicle, lander, Mars.
Actuators: wheels/legs, sample-collection device, analysis devices, radio transmitter.
Sensors: camera, touch sensors, accelerometers, orientation sensors, radio receiver.

Characteristics (properties) of the task environment

Playing soccer: partially observable, stochastic, sequential, dynamic, continuous, multi-agent.
Shopping for used AI books on the Internet: partially observable, deterministic, sequential, static*, discrete, single-agent.
Playing a tennis match: fully observable, stochastic, episodic, dynamic, continuous, multi-agent.
Practicing tennis against a wall: fully observable, stochastic, episodic, dynamic, continuous, single-agent.
Mathematician's theorem-proving assistant: fully observable, deterministic, sequential, static, discrete, single-agent.
Autonomous Mars rover: partially observable, stochastic, sequential, dynamic, continuous, single-agent.


3. Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.

Agent:

o An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Agent function:

o A function that maps any given percept sequence to an action.

Agent program:

o The program that, combined with a machine architecture, implements an agent function.

Rationality:

o A property of agents that choose actions that maximize their expected utility, given the percepts to date.

Autonomy:

o A property of agents whose behavior is determined by their own experience rather than solely by their initial programming.

Reflex agent:

o An agent whose action depends only on the current percept.

Model-based agent:

o An agent whose action is derived directly from an internal model of the current world state that is updated over time.

Goal-based agent:

o An agent that selects actions that it believes will achieve explicitly represented goals.

Utility-based agent:

o An agent that selects actions that it believes will maximize the expected utility of the outcome state.

Learning agent:

o An agent whose behavior improves over time based on its experience.

4. Explore the differences between agent functions and agent programs by answering the following:

a. Can there be more than one agent program that implements a given agent function? Give an example, or show why one is not possible.

Yes. The agent program is the code that implements the agent function, and the same function can be computed in many ways; for example, a lookup table and a procedure that compute the same percept-to-action mapping are two different programs implementing one agent function.

b. Are there agent functions that cannot be implemented by any agent program?

Yes, For example, if an agent function was to count to find the square root of a negative number. There is no

way to solve that.

c. Given a fixed machine architecture, does each agent program implement exactly one agent function?

Yes; the agent's behavior is fixed by the architecture and program.

d. Given an architecture with n bits of storage, how many different possible agent programs are there?

There are 2^n possible agent programs, one for each distinct setting of the n bits.


5. Explain why problem formulation must follow goal formulation.

In goal formulation, we decide which aspects of the world we are interested in, and which can be ignored or abstracted away. Then in problem formulation we decide how to manipulate the important aspects (and ignore the others). If we did problem formulation first we would not know what to include and what to leave out. That said, it can happen that there is a cycle of iterations between goal formulation, problem formulation, and problem solving until one arrives at a sufficiently useful and efficient solution.

6. Define in your own words the following terms: state, state space, search tree, search node, goal, action, successor function, and branching factor.

State:

o A situation that an agent can find itself in. We distinguish two types of states: world states (the actual concrete situations in the real world) and representational states (the abstract descriptions of the real world that are used by the agent in deliberating about what to do).

State space:

o A graph whose nodes are the set of all states, and whose links are actions that transform one state into another.

Search tree:

o A tree (a graph with no undirected loops) in which the root node is the start state and the set of children for each node consists of the states reachable by taking any action.

Search node:

o A node in the search tree.

Goal:

o A state that the agent is trying to reach.

Action:

o Something that the agent can choose to do.

Successor function:

o A function that describes the agent's options: given a state, it returns a set of (action, state) pairs, where each state is the one reachable by taking the action (a short code sketch follows this list).

Branching factor:

o The number of actions available to the agent in a state; in a search tree, the number of children of a node.
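As a concrete illustration (not part of the original exercises), here is a minimal Python sketch of a successor function, using the state space of question 7 below, where each state k has successors 2k and 2k + 1:

    def successors(k):
        """Return the (action, state) pairs reachable from state k."""
        return [("left", 2 * k), ("right", 2 * k + 1)]

    print(successors(1))  # [('left', 2), ('right', 3)]

Every state has exactly two successors, so the branching factor of this space is 2.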

7. Consider a state space where the start state is number 1 and each state k has two successors: numbers 2k and 2k + 1.

a. Draw the portion of the state space for states 1 to 15.

b. Suppose the goal state is 11. List the order in which nodes will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.

a. The state space is a complete binary tree rooted at 1; each state k has children 2k and 2k + 1, so states 1 to 15 form the first four levels of the tree.

b. Breadth-first: 1 2 3 4 5 6 7 8 9 10 11
Depth-limited (limit 3): 1 2 4 8 9 5 10 11
Iterative deepening: 1; 1 2 3; 1 2 4 5 3 6 7; 1 2 4 8 9 5 10 11
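These visit orders can be checked mechanically. The following Python sketch (illustrative, not part of the original notes) implements the three searches for this state space, testing for the goal when a node is visited:

    from collections import deque

    def successors(k):
        # State k has exactly two successors: 2k and 2k + 1.
        return [2 * k, 2 * k + 1]

    def bfs(start, goal):
        # Breadth-first search; records nodes in the order they are visited.
        visited, frontier = [], deque([start])
        while frontier:
            node = frontier.popleft()
            visited.append(node)
            if node == goal:
                return visited
            frontier.extend(successors(node))

    def dls(node, goal, limit, visited):
        # Depth-limited search; True if the goal is found within the limit.
        visited.append(node)
        if node == goal or limit == 0:
            return node == goal
        return any(dls(child, goal, limit - 1, visited)
                   for child in successors(node))

    def ids(start, goal, max_depth):
        # Iterative deepening: repeat depth-limited search with growing limits.
        runs = []
        for limit in range(max_depth + 1):
            visited = []
            found = dls(start, goal, limit, visited)
            runs.append(visited)
            if found:
                break
        return runs

    print(bfs(1, 11))      # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
    trace = []
    dls(1, 11, 3, trace)
    print(trace)           # [1, 2, 4, 8, 9, 5, 10, 11]
    print(ids(1, 11, 3))   # [[1], [1, 2, 3], [1, 2, 4, 5, 3, 6, 7],
                           #  [1, 2, 4, 8, 9, 5, 10, 11]]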


8. Prove each of the following statements:

a. Breadth-first search is a special case of uniform-cost search.

Uniform-cost search reduces to breadth-first search when all step costs are equal.

b. Breadth-first search, uniform-cost search, and depth-first search are special cases of best-first tree search.

Best-first search reduces to breadth-first search when f(n) is the number of edges from the start node to n; it reduces to uniform-cost search when f(n) = g(n); and it reduces to depth-first search when, for example, f(n) = -(number of edges from the start node to n), which forces deep nodes on the current branch to be expanded before shallow nodes on other branches.

c. Uniform-cost search is a special case of A∗ search.

A* search reduces to uniform-cost search when the heuristic function is zero everywhere, i.e., h(n) = 0 for all n.
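These reductions can be made concrete with a single generic procedure. The sketch below (illustrative, not from the original notes; the names best_first and graph are assumptions of this example) implements best-first tree search parameterized by an evaluation function f:

    import heapq
    import itertools

    def best_first(start, goal, successors, f):
        # Generic best-first tree search. f(state, depth, g) is the priority;
        # the counter breaks ties so heapq never compares states directly.
        tie = itertools.count()
        frontier = [(f(start, 0, 0), next(tie), start, 0, 0)]
        while frontier:
            _, _, state, depth, g = heapq.heappop(frontier)
            if state == goal:
                return g
            for cost, child in successors(state):
                heapq.heappush(frontier, (f(child, depth + 1, g + cost),
                                          next(tie), child, depth + 1, g + cost))
        return None

    # Choosing f recovers each algorithm (h is any heuristic function):
    #   breadth-first:  f = lambda n, depth, g: depth
    #   uniform-cost:   f = lambda n, depth, g: g
    #   depth-first:    f = lambda n, depth, g: -depth
    #   A*:             f = lambda n, depth, g: g + h(n)

    # Example on a tiny graph: {state: [(step_cost, child), ...]}
    graph = {'S': [(1, 'A'), (4, 'B')], 'A': [(2, 'G')], 'B': [(1, 'G')], 'G': []}
    print(best_first('S', 'G', lambda s: graph[s],
                     lambda n, depth, g: g))  # 3, the uniform-cost path S-A-G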

9. The heuristic path algorithm is a best-first search in which the evaluation function is f(n) = (2 - w)g(n) + wh(n). What kind of search does this perform for w = 0, w = 1, and w = 2?

w = 0: f(n) = 2g(n), which orders nodes exactly as g(n) does, so this is uniform-cost search.

w = 1: f(n) = g(n) + h(n), which is A* search.

w = 2: f(n) = 2h(n), which orders nodes exactly as h(n) does, so this is greedy best-first search.
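Using the generic best_first sketch from question 8, the whole family can be written as one evaluation-function factory (illustrative; heuristic_path_f is an assumed name):

    def heuristic_path_f(w, h):
        # f(n) = (2 - w) * g(n) + w * h(n), for use with best_first above.
        return lambda n, depth, g: (2 - w) * g + w * h(n)

    # w = 0 -> f = 2g      (uniform-cost ordering)
    # w = 1 -> f = g + h   (A*)
    # w = 2 -> f = 2h      (greedy best-first ordering)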

10. What are the pros (if any) and cons (if any) of using A* versus uniform-cost search? Explain; consider both time and space.

Evaluating the heuristic in A* takes extra time per node, but if the heuristic is good (informed), it can cut down the number of expanded states considerably, which helps both running time and space.

11. Consider the following initial and goal states of the 8-puzzle (the blank is shown as _):

Initial state:
1 2 3
8 _ 5
4 7 6

Goal state:
1 2 3
4 5 6
7 8 _

Trace the A* search algorithm, using the total Manhattan distance heuristic, to find the shortest path from the initial state shown above to the goal state.
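The full hand trace is not reproduced in this extraction. As a sanity check, here is an illustrative Python sketch (not part of the original notes) that computes the heuristic and runs A* with a strict expanded list; it reports h(start) = 6, and the optimal solution indeed takes 6 moves:

    import heapq
    import itertools

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank
    START = (1, 2, 3, 8, 0, 5, 4, 7, 6)  # the initial state shown above

    def manhattan(state):
        # Total Manhattan distance of all tiles (blank excluded) to GOAL.
        total = 0
        for i, tile in enumerate(state):
            if tile == 0:
                continue
            gi = GOAL.index(tile)
            total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
        return total

    def neighbors(state):
        # States reachable by sliding an adjacent tile into the blank.
        b = state.index(0)
        r, c = divmod(b, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                s = list(state)
                j = nr * 3 + nc
                s[b], s[j] = s[j], s[b]
                yield tuple(s)

    def astar(start):
        # A* with f = g + h, goal test at expansion, strict expanded list.
        tie = itertools.count()
        frontier = [(manhattan(start), next(tie), 0, start, [start])]
        expanded = set()
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if state == GOAL:
                return path
            if state in expanded:
                continue
            expanded.add(state)
            for child in neighbors(state):
                if child not in expanded:
                    heapq.heappush(frontier, (g + 1 + manhattan(child),
                                              next(tie), g + 1, child,
                                              [*path, child]))
        return None

    path = astar(START)
    print(manhattan(START))  # 6, the heuristic value of the initial state
    print(len(path) - 1)     # 6, the number of moves on the shortest path

Replacing manhattan with a function returning 0 turns this into uniform-cost search, which finds the same 6-move solution while expanding many more states, illustrating the trade-off from question 10.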


12. Consider the graph shown in the figure below (figure not reproduced). We can search it with a variety of different algorithms, resulting in different search trees. Each of the trees (labeled G1 through G7) was generated by searching this graph, but with a different algorithm. Assume that children of a node are visited in alphabetical order. Each tree shows all the nodes that have been visited. Numbers next to nodes indicate the relevant "score" used by the algorithm for those nodes.

For each tree, indicate whether it was generated with

1. Depth first search

2. Breadth first search

3. Uniform cost search

4. A* search

5. Greedy Best-first search


In all cases a strict expanded list was used. Furthermore, if you choose an algorithm that uses a heuristic function, say whether we used

H1: heuristic 1 = {h(A) = 3, h(B) = 6, h(C) = 4, h(D) = 3}

H2: heuristic 2 = {h(A) = 3, h(B) = 3, h(C) = 0, h(D) = 2}

G1: Breadth-first search
G2: Greedy best-first search, H1
G3: A* search, H1
G4: Greedy best-first search, H2
G5: Depth-first search
G6: A* search, H2
G7: Uniform-cost search