Adaptive Automation for Human Performance in Large-Scale Networked Systems
Raja Parasuraman, Ewart de Visser (George Mason University)
Kickoff Meeting, Carnegie Mellon University, August 26, 2008. AFOSR MURI: Modeling Synergies in Large-Scale Human-Machine Networked Systems



TRANSCRIPT



Research Goals

• Develop validated theories and techniques to predict behavior of large-scale, networked human-machine systems involving unmanned vehicles

• Model human decision making efficiency in such networked systems

• Investigate the efficacy of adaptive automation to enhance human-system performance


Collaborations with MURI Team Members

• Human-Robot Team Performance and Modeling (Cornell/MIT/Pitt)

• Human-Agent Collaboration (GMU, CMU)

• Scaling up to Large Networks (All)


George Mason University Approach

• Conduct empirical and modeling studies of human decision making performance with multiple robotic assets

• Examine human-system performance using the Distributed Decision Making simulation (DDD Version 4) – (with Mark Campbell of Cornell)

• Examine efficacy of Adaptive Delegation Interface (ADI) with Machinetta for Human-Agent collaboration – (with Paul Scerri of CMU)

• Develop human-robot performance metrics for use in large networks


Joint GMU-Cornell Approach

• Examine human-system performance (1-4 person teams, multiple unmanned vehicles, using DDD) in simulated reconnaissance missions (GMU)

• Model human decision-making performance (Cornell)

• Identify and quantify human “cognitive bottlenecks” (GMU and Cornell)

• Identify points for “adaptive tasking” or adaptive automation (GMU and Cornell)

• Scale up to larger networks (more UVs and agents)

[Diagram: Teamwork proxies (one proxy per team member) support two complementary approaches. Adaptable automation is invoked by the user through a playbook; adaptive automation is invoked automatically, with performance-based, event-based, or model-based invocation methods.]

Parasuraman (2000); Kaber & Endsley (2004); Scerri et al. (2006); Miller & Parasuraman (2007)

Adaptable/Adaptive Automation
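The three invocation methods for adaptive automation can be sketched in code. This is a hypothetical illustration, not the MURI software: the class, thresholds, and event names are all invented, and a real system would use validated operator-performance measures and workload models.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveAutomationManager:
    """Illustrative sketch of the three invocation methods.

    All thresholds and event names are hypothetical stand-ins.
    """
    perf_threshold: float = 0.6   # performance-based trigger level
    critical_events: set = field(
        default_factory=lambda: {"comm_loss", "pop_up_threat"})
    automation_on: bool = False

    def performance_based(self, hit_rate: float) -> bool:
        # Invoke automation when measured operator performance drops.
        if hit_rate < self.perf_threshold:
            self.automation_on = True
        return self.automation_on

    def event_based(self, event: str) -> bool:
        # Invoke automation when a predefined mission event occurs.
        if event in self.critical_events:
            self.automation_on = True
        return self.automation_on

    def model_based(self, predicted_workload: float,
                    capacity: float = 1.0) -> bool:
        # Invoke automation when an operator model predicts overload.
        if predicted_workload > capacity:
            self.automation_on = True
        return self.automation_on

mgr = AdaptiveAutomationManager()
mgr.performance_based(hit_rate=0.45)   # low detection rate: automation engages
```

In contrast, adaptable automation would put the same switch under the user's control (e.g., the user calling a play), rather than triggering it from measured performance, events, or a model.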

Playbook Interface for RoboFlag

• Playbook: Enables human-automation communication about plans, goals, and methods—akin to calling “plays” from a sports team’s playbook (Miller & Parasuraman, 2007)

• Validation experiments with RoboFlag (Parasuraman et al., IEEE SMC-Part A, 2005)

• Human operator supervises multiple Blue Team robots using a delegation interface (Playbook)

• Adapted from Cornell University; work done under DARPA MICA Program

Methods

• Single operator sends a team of 4-8 robots (Blue Team) into opponent territory (populated by Red Team robots) to locate a specified target and return home as quickly as possible

• User has a Playbook of automated tools to direct robots: waypoint (point-and-click) control (“Manual”) and automated plays (Circle Offense; Circle Defense; Patrol Border)

• User selects the number of robots to which plays are assigned

• User can intervene in a robot’s execution of a play and apply corrective measures if necessary

• Red Team robot tactics are either predictable (always offensive or defensive) or unpredictable (either offensive or defensive)
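The play-calling scheme in the methods can be sketched as follows. This is a minimal hypothetical illustration: the play names come from the slide, but the classes, methods, and idle-robot selection rule are invented and do not reflect the actual RoboFlag implementation.

```python
# Hypothetical sketch of Playbook-style delegation: the operator assigns a
# named play to some number of robots and can override any robot manually.
PLAYS = {"circle_offense", "circle_defense", "patrol_border"}

class Robot:
    def __init__(self, rid):
        self.rid = rid
        self.mode = "idle"   # "idle", "manual", or a play name

    def assign_play(self, play):
        assert play in PLAYS, f"unknown play: {play}"
        self.mode = play

    def manual_waypoint(self, x, y):
        # Operator intervenes: point-and-click control overrides the play.
        self.mode = "manual"
        self.target = (x, y)

class Playbook:
    def __init__(self, robots):
        self.robots = robots

    def call_play(self, play, n_robots):
        # Delegate the play to the first n idle robots (invented policy).
        chosen = [r for r in self.robots if r.mode == "idle"][:n_robots]
        for r in chosen:
            r.assign_play(play)
        return chosen

team = [Robot(i) for i in range(8)]
pb = Playbook(team)
pb.call_play("patrol_border", 3)   # three robots patrol the border
team[0].manual_waypoint(12, 40)    # operator overrides robot 0
```

The point of the delegation interface is exactly this mix: high-level plays for routine work, with manual override available when the automation proves brittle.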


Hypotheses for Efficacy of Playbook Interface

• Use of automated plays at times of the user’s choosing enhances mission success rate and reduces mission completion time

• Flexible use of either automated plays or manual control allows the user to compensate for the “brittleness” of automation, particularly when opponent tactics are unpredictable

• Management workload associated with delegation is only low to moderate


Flexible Delegation Enhances System Performance without Increasing User Workload

[Figure: Two bar charts comparing static and flexible Playbook interface types. Left panel: subjective workload (NASA-TLX, 0-100 scale) is similar for the two interfaces. Right panel: mission completion time (s, 0-80 scale) is shorter with the flexible interface.]

Parasuraman et al., IEEE SMC-Part A, 2005


Playbook for Pre-Mission UCAV Planning

• User can call a high-end play, e.g., Airfield Denial, or

• Stipulate the method and procedure for performing Airfield Denial by:

– filling in specific variable values (i.e., which airfield is to be attacked)

– specifying which UAVs are to be used

– specifying where they should rendezvous

– specifying which sub-methods and optional task-path branches are to be used

– etc.

Miller & Parasuraman, Human Factors, 2007

Simulation Platforms at GMU

• DDD 4.0

– 1-4 person teams

– Large numbers of UVs/agents

• Adaptive Delegation Interface (ADI)

– Designed for planning, executing, and monitoring UV movements

– Adaptable: high-level plans can be proposed by the user and modified by the automation

– Adaptive: UVs can autonomously adjust to certain events in the scenario


Adaptive Delegation For Planning

• Delegation Interfaces: Execution

– Many human-robot interfaces are primarily execution-based

– RoboFlag is an example of an execution-based delegation interface

• Delegation Interfaces: Planning

– Little prior work on real-time planning with robotic vehicles

– Related work on route planning for pilots: Layton et al. (1994)

• Preliminary research under DARPA’s Multiagent Adjustable Autonomy Framework (MAAF) for Multi-Robot, Multi-Human Teams (with Amos Freedy)


Adaptive Delegation Concept

[Diagram: The robotic operator works through an adaptive interface connected to an automated planning assistant and a doctrine checker, exchanging plans, instructions, and planning feedback. The doctrine checker verifies plans against doctrine; Machinetta performs automated plan generation, sends instructions to the vehicles in the battle space, and monitors plan execution. All components operate over a shared task model.]
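The plan-and-feedback loop in the concept diagram can be sketched as follows. All rule and field names here are invented stand-ins, not the actual doctrine checker or planning assistant; one rule simply mirrors the “include a UAV” advisory that appears in the interface.

```python
# Hypothetical sketch of the adaptive-delegation loop: the operator proposes
# a high-level plan, the planning assistant completes it, and a doctrine
# checker verifies it before instructions go to the vehicles.

def doctrine_check(plan):
    """Return a list of doctrine violations (empty list = plan acceptable)."""
    issues = []
    if not any(v["type"] == "UAV" for v in plan["vehicles"]):
        issues.append("plan should include a UAV")
    if plan["fuel_reserve"] < 0.1:
        issues.append("fuel reserve below minimum")
    return issues

def planning_assistant(plan):
    # Automation fills in details of the operator's high-level plan.
    plan.setdefault("fuel_reserve", 0.2)
    plan["route"] = ["G1", "B5"]   # placeholder for an optimized route
    return plan

operator_plan = {"vehicles": [{"id": "UGV1", "type": "UGV"},
                              {"id": "UAV1", "type": "UAV"}]}
plan = planning_assistant(operator_plan)
feedback = doctrine_check(plan)
if not feedback:
    print("plan verified; sending instructions to vehicles")
```

The shared task model in the diagram corresponds to the `plan` dictionary here: every component reads and annotates the same structure rather than keeping private copies.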


Automated Route Planning

- Task ordering goes through all possible permutations of the given tasks (if requested) and submits to Machinetta a specific task order to be followed.

- Machinetta generates the optimized path plan to reach target locations

- Post processing makes use of Machinetta generated paths (for the SEARCH task type) and introduces loading/unloading time (for the EXTRACT task type) into plans.

[Figure: Processing pipeline from Task Ordering through Machinetta to Post Processing, with example region-traversal grids (9 regions over 4 time steps) for UV1 and UV2.]

- Given the target location, the vehicle’s current location, and the relative importance of time, fuel, task completion, and risk avoidance, Machinetta iterates through all possible region-traversal options and converges on the best trajectory (in terms of time, fuel, and risk). One such trajectory, for one vehicle over 4 time steps and 9 regions, is shown in the figure above.

- Machinetta takes into account both user-specified parameters (such as task and risk importance) and vehicle capabilities (such as speed and fuel), and generates plans that can implement complex behaviors such as delayed action and risk avoidance.
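The exhaustive task-ordering step above can be sketched as a brute-force search over permutations. This is a simplified stand-in: a Euclidean distance cost replaces Machinetta’s richer time/fuel/risk objective, and the task names and coordinates are made up.

```python
# Minimal sketch of permutation-based task ordering: enumerate every ordering
# of the tasks, cost each resulting route, and keep the cheapest.
from itertools import permutations
from math import hypot

def route_cost(start, stops):
    """Total straight-line travel distance visiting stops in order."""
    cost, pos = 0.0, start
    for p in stops:
        cost += hypot(p[0] - pos[0], p[1] - pos[1])
        pos = p
    return cost

def best_task_order(start, tasks):
    # tasks: {name: (x, y)}. Brute force is fine for the handful of tasks a
    # single UV carries, but the search grows factorially with task count.
    return min(permutations(tasks),
               key=lambda order: route_cost(start, [tasks[t] for t in order]))

tasks = {"search_G1": (2, 0), "extract_G1": (2, 1), "search_B5": (8, 5)}
order = best_task_order((0, 0), tasks)
```

Once an order is fixed, it would be submitted to the path planner, which handles the within-route details (region traversal, fuel, risk) that this sketch ignores.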

Multiagent Adjustable Autonomy Framework

• Plan Instantiation: the plan is given to the UVs, which carry it out

• Autonomous Behavior: an obstacle appears on the path; UGV 1 avoids the obstacle

• Dynamic Reallocation: UGV 2 suffers a camera failure; UGV 1 then provides the view

• Adjustable Autonomy: UGV 2 asks for confirmation; the human responds by confirming IED presence

• Dynamic Reallocation: UGV 1 loses comms; a UAV assists and functions as a relay station
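The dynamic-reallocation behavior above can be sketched as a capability-matching rule. Everything here (the field names, capability labels, and the first-capable-teammate policy) is a hypothetical simplification of role reallocation, not the Machinetta implementation.

```python
# Hypothetical sketch of dynamic reallocation: when a vehicle reports a
# capability failure, its role is handed to a teammate that still has the
# needed capability; if none exists, escalate to the human operator.

def reallocate(team, failed_vehicle, needed_capability):
    """Reassign failed_vehicle's first role to a capable teammate, if any."""
    for v in team:
        if v["id"] != failed_vehicle["id"] and needed_capability in v["caps"]:
            v["roles"].append(failed_vehicle["roles"].pop(0))
            return v["id"]
    return None   # no teammate can cover: adjustable autonomy kicks in

ugv1 = {"id": "UGV1", "caps": {"camera", "drive"}, "roles": ["avoid_obstacle"]}
ugv2 = {"id": "UGV2", "caps": {"drive"}, "roles": ["provide_view"]}  # camera failed
helper = reallocate([ugv1, ugv2], ugv2, "camera")
```

The `None` branch is where adjustable autonomy enters: when automatic reallocation cannot cover a role, the framework asks the human to decide, as in the IED-confirmation example above.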


The Adaptive Delegation Interface

[Screenshot: The Adaptive Delegation Interface, showing the mission wizard and compose panel, mission map, task library, and automated planning assistant.]


Mission Planning & Execution

[Screenshot: The main interface comprises six panels: Task Library, Mission Wizard, Mission Compose, Mission Map, Mission Execution, and Automated Planning Assistant.]

Compose View

[Screenshot: The compose view organizes behaviors into Tasks (move, search, wait, avoid, extract, stop, go home), Plays (UAV Recon clockwise, UAV Recon counter-clockwise, UGV Recon & Extract), Super Plays (Rescue & Extract, Reconnaissance), and Reactions. The automated planning assistant proposes candidate plans, e.g., Plan A and Plan B, both Rescue & Extract plans using 1 UAV and 2 UGVs, scored on time, damage, victims, and overall value, with Review, Modify, and Submit plan actions and status messages such as “New plans have been generated” and “You should include a UAV in the plan before submission”. Per-vehicle panels show assigned tasks and parameters (e.g., UAV 1: UAV Recon, with Move and Search in region G1; UGV 1 and 2: UGV Recon & Extract, with Move, Search, and Extract in G1, then Move to B5), add/delete asset controls, and Check plan, Finish plan, and Submit plan buttons.]

Agent Status Panel

[Screenshot: Status panels for Talon Unit Alpha (MI Company). A role/action table lists task status, time, and issues, e.g., “Provide camera support: re-defining role, time unknown, camera failure”; “Disarm IED: moving to IED location, ~10 min., no issues”; “Recon & clear area of IEDs: time unknown”. A task message center reports “Cannot see IED: role reallocation…”. Additional elements include a vehicle timeline, sensor views, an agent control panel with a STOP button, and an options pop-out.]


Advantages of using the Adaptive Delegation Interface

• Users can give high-level commands to a set of vehicles

– No need to input each task individually

– Automation can generate and finish plans

– Humans can adjust plans as needed

• Users can monitor executed plans and intervene if necessary

• Minimal training needed (20-30 min.)