TRANSCRIPT
Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving
Shadan Sadeghian Borojeni, Lewis Chuang, Wilko Heuten, Susanne Boll
1956 / 1957 ELECTRICITY MAY BE THE DRIVER. [Advertisement, Central Power and Light Company]
Automated Driving: General Motors Motorama Exhibit, 1956
Sadeghianborojeni et al., Auto-UI 2016, Ann Arbor, MI, USA, 04.11.16
https://www.yahoo.com/sy/ny/api/res/1.2/.oxfXaknJDdyMmgA1pj4nw--/YXBwaWQ9aGlnaGxhbmRlcjtzbT0xO3c9ODAw/http://slingstone.zenfs.com/offnetwork/4eda070e65bbdb7f38115a75abfbc7ed
Levels of Automation (NHTSA)
Human vs. Machine: Battle of the Sensors
http://www.telegraph.co.uk/business/sme-library/fleet-management/driverless-cars-explained/
Ford
vigilance: monitor for unexpected events
concentrate: driving, monitor busy traffic
switching: different information locations
share: other tasks
suppress: inhibit unnecessary actions
preparation: initiate action procedures
goal-setting: maintenance of objective(s)
Software of Attention? e.g., on the highway, looking for an exit
Stuss, Shallice, Alexander, & Picton (1995). A multi-disciplinary approach to anterior attentional functions. In Grafman, Holyoak, Boller (Eds.), Structure and Function of the Human Prefrontal Cortex, Annals of the New York Academy of Sciences, 279, 191-211.
1. When driving, the driver has to be vigilant to respond to unexpected events that occur rarely, such as the appearance of a pothole.
2. In addition, he has to concentrate on his primary task, which is driving.
3. For driving itself, he might need to switch attention between the traffic ahead and the mirrors for the traffic behind.
4. He might also be talking to a passenger and has to manage how resources are shared between talking and driving.
5. Naturally, he will want to look at the passenger, and extra resources are required to suppress this unhelpful behavior during driving.
6. When he notices the sign for the highway exit, he will need resources to prepare the actions for an exit maneuver. This also requires resources.
7. Throughout all of this, he has set himself a goal, namely getting to a specific place, and will need to remind himself of this goal constantly. This also requires resources.
Software of Attention? e.g., on the highway, looking for an exit
vigilance: right lateral mid-frontal regions
concentrate: anterior cingulate
switching: dorsolateral prefrontal cortex
share: orbitofrontal and anterior cingulate
suppress: bilateral orbitofrontal areas
preparation: pre-motor cortex
goal-setting: dorsolateral prefrontal cortex
Assisting Takeover Situations
How can we support a driver's ability to switch from engaging with a non-vehicle-handling task to monitoring and/or resuming the complex maneuvers that constitute effective vehicle handling?
Attention disengagement from non-driving task
Shifting attention to manual driving task
switching: different information locations
preparation: initiate action procedures
suppress: inhibit unnecessary actions
Experiment
Audio-Visual Take-over Request
Goal:
Shift attention from the secondary task to the driving task
Communicate the driving environment and the upcoming task
Prepare the appropriate maneuver
Attention disengagement from non-driving task
Shifting attention to manual driving task
Shift driver's attention
Shift driver's attention and provide context
Design: Take-over Requests
Ambient light displays can be effective in priming a take-over situation.
Presenting contextual information in TORs affects drivers' performance.
The presentation pattern of the light cues affects drivers' performance.
First, we show that, with audio cues priming users with the urgency of the take-over situation, locating the visual cue in the periphery (namely, a peripheral light display) can reduce mental workload and assist safe maneuvers. Second, our light display can convey contextual information to assist steering in take-over situations. Third, using different light patterns for presenting contextual information can affect driving behavior.
RGB LED strip
Tablet PC
Eye-tracker
20 participants
A fixed-base right-hand-traffic driving simulator with a 150-degree field of vision was used. The simulation was created with SILAB. Auditory cues were played simultaneously from speakers built into the driving simulator, located behind the driver on both sides. An Adafruit NeoPixel digital RGB LED strip with a resolution of 144 LEDs per meter was used. To reduce the intensity of the light display, the LED strip was placed in a matte white acrylic LED profile. The frame was located on the dashboard of the driving simulator behind the steering wheel, 65 degrees from fixation on the tablet PC presenting the 1-back task, which was on the driver's lap.
To detect the eye gaze of the participants during the experiment, they were asked to wear Dikablis Glasses by Ergoneers. The eye-tracker was calibrated before each trial to ensure constant tracking of eye-gaze behavior. Two physical markers on the front panel and two virtual ones on the simulator displays were used. The calibration procedure took between 30 seconds and one minute for each trial. We used the standard eye-tracker software for calibration, video recording, and analysis of participants' eye gaze.
Scenario
Automated Driving
Manual Driving
5s
At 30-40 second intervals, a light and an audio cue were presented as a TOR, informing drivers of a road block (a truck with road construction signs and alerts parked on the road) in either the left or the right lane. The TORs were presented at 5 seconds TTC to the road block.
Primary task: Visuospatial 1-back
N-back task
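In a visuospatial 1-back, a stimulus appears at a grid position on each trial and the participant indicates whether it matches the position one trial back. A minimal sketch of that trial logic; the grid size, match rate, and function names are illustrative assumptions, not the study's implementation:

```python
import random

def generate_trials(n_trials, n_positions=8, match_rate=0.3, rng=None):
    """Generate a sequence of grid positions for a visuospatial 1-back task.

    With probability `match_rate`, a trial repeats the previous position
    (a target); otherwise a different position is drawn.
    """
    rng = rng or random.Random()
    trials = [rng.randrange(n_positions)]
    for _ in range(n_trials - 1):
        if rng.random() < match_rate:
            trials.append(trials[-1])           # 1-back match
        else:
            others = [p for p in range(n_positions) if p != trials[-1]]
            trials.append(rng.choice(others))   # non-match
    return trials

def score_responses(trials, responses):
    """Score "match" responses against the true 1-back structure.

    `responses[i]` is True if the participant judged trial i a match.
    Trial 0 has no predecessor and is skipped.
    """
    hits = false_alarms = 0
    for i in range(1, len(trials)):
        is_match = trials[i] == trials[i - 1]
        if responses[i] and is_match:
            hits += 1
        elif responses[i] and not is_match:
            false_alarms += 1
    return hits, false_alarms
```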
Baseline
All LEDs turn on
Static
Half of the LEDs turn on for the steering direction
Condition (baseline)
Moving
Half of the LEDs turn on sequentially for the steering direction
Condition (baseline)
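The two contextual cue conditions can be sketched as frame generators over the LED strip (144 LEDs, as in the apparatus). The split point, step size, and function names are assumptions for illustration; an actual deployment would push these frames to the NeoPixel hardware:

```python
def static_pattern(n_leds, direction):
    """Static cue: the half of the strip toward the steering direction
    lights up all at once. Returns one frame of on/off flags per LED."""
    half = n_leds // 2
    if direction == "left":
        return [i < half for i in range(n_leds)]
    return [i >= half for i in range(n_leds)]

def moving_pattern_frames(n_leds, direction, step=8):
    """Moving cue: the same half lights up sequentially, `step` LEDs per
    frame, sweeping from the center toward the steering direction."""
    half = n_leds // 2
    frames = []
    for lit in range(step, half + 1, step):
        if direction == "left":
            frame = [half - lit <= i < half for i in range(n_leds)]
        else:
            frame = [half <= i < half + lit for i in range(n_leds)]
        frames.append(frame)
    return frames
```

The final frame of the moving cue equals the static cue for the same direction, so the two conditions end in the same display state and differ only in onset dynamics.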
Video demonstration
Second trial only
Measurements
Reaction time (RT): the time between presentation of the TOR and the first steering action
Time to collision (TTC) to obstacle: the time between the lane-change maneuver and collision with the road block
Workload (NASA-RTLX): self-reported workload ratings
Gaze behavior: number and duration of glances at the light display when the TORs were presented
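Given those definitions, the two driving measures could be computed from logged events roughly as follows; this is a sketch assuming timestamped logs and constant approach speed, not the authors' analysis code:

```python
def reaction_time_ms(tor_onset_s, first_steer_s):
    """RT: time from TOR presentation to the first steering action, in ms."""
    return (first_steer_s - tor_onset_s) * 1000.0

def time_to_collision_s(distance_to_block_m, speed_mps):
    """TTC: time remaining before reaching the road block at the moment of
    the lane-change maneuver, assuming constant speed."""
    if speed_mps <= 0:
        return float("inf")  # not closing on the obstacle
    return distance_to_block_m / speed_mps
```

For example, a TOR at t = 10.0 s and a first steering input at t = 11.25 s gives an RT of 1250 ms; changing lanes 50 m from the block at 25 m/s gives a 2 s TTC.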
Hypotheses
1: Performance
RT_baseline < RT_static < RT_moving
TTC_moving > TTC_static > TTC_baseline
2: Workload
NASA-RTLX_moving < NASA-RTLX_static < NASA-RTLX_baseline
3: Glance Behavior on Cue
G_Freq_baseline ≈ G_Freq_moving < G_Freq_static
G_Dur_baseline ≈ G_Dur_moving < G_Dur_static
Results
Qualitative Feedback
Having light in the periphery together with the auditory cue attracts attention faster to the handover task.
It saves time scanning the road and seeing what is wrong and what I have to do.
85% of the participants preferred conditions with contextual cuing (static and moving) to the baseline.
Between the static and moving lights, the moving light was preferred (71%).
Reaction Times (ms)
F(2,38) = 7.46, p < 0.01, η² = 0.24
Bayes Factor p(H0):p(H1) = 3.58
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773-795.
Post-hoc Tukey HSD tests on both measures revealed that both the static and moving cue conditions were significantly different from the baseline condition but not from each other.
Reaction Times (ms)
F(2,38) = 7.46, p < 0.01, η² = 0.24
H0 is 3.58 times more likely than H1 (Static vs. Moving)
Time to Collision to Obstacle (s)
F(2,38) = 7.70, p < 0.01, η² = 0.25
H0 is 4.3 times more likely than H1 (Static vs. Moving)
NASA-RTLX Overall Workload: F(1.89, 37.95) = 2.16, p = 0.13
Means (SD): 28.57 (16.03), 33.21 (12.19), 27.14 (16.32)
Moving < Static < Baseline
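NASA-RTLX (Raw TLX) drops the pairwise weighting step of the full NASA-TLX and simply averages the six subscale ratings. A minimal sketch of that scoring; the example ratings are illustrative, not study data:

```python
def nasa_rtlx(ratings):
    """NASA Raw TLX: unweighted mean of the six subscale ratings (0-100).
    Subscales: mental, physical, and temporal demand; performance;
    effort; frustration."""
    if len(ratings) != 6:
        raise ValueError("expected six subscale ratings")
    return sum(ratings) / 6.0
```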
Number of glances
H0 is 1.3 times less likely than H1 (Static vs. Baseline)
H0 is 1.7 times more likely than H1 (Moving vs. Baseline)
We performed JZS Bayesian t-tests in order to understand how the manipulated cues (static and moving) compared to the baseline. In terms of the number of glances, the static cue (BF01 = 0.21) was more likely than the moving cue (BF01 = 0.49) to be different from the baseline. In addition, the mean duration of these glances was more likely to be different for the static cue (BF01 = 0.8) than for the moving cue (BF01 = 1.66), relative to the baseline. Using the labels provided by [22], we have substantial evidence that static cues attract more glances than the baseline, but only anecdotal evidence for moving cues. Furthermore, we have anecdotal evidence that moving cues result in glances of similar duration to our baseline cues, and anecdotal evidence that static cues result in longer glances.
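Labels such as "anecdotal" and "substantial" come from a Jeffreys-style categorization of Bayes factors. A sketch of such a mapping, with thresholds per Jeffreys' conventional scale rather than code from the study:

```python
def jeffreys_label(bf10):
    """Map a Bayes factor BF10 (evidence for H1 over H0) to Jeffreys-style
    evidence categories. BF01 values as reported in the results can be
    converted via BF10 = 1 / BF01."""
    if bf10 < 1:
        # Evidence favors H0: classify the reciprocal and flip direction.
        return jeffreys_label(1 / bf10) + " (for H0)"
    if bf10 < 3:
        return "anecdotal"
    if bf10 < 10:
        return "substantial"
    if bf10 < 30:
        return "strong"
    if bf10 < 100:
        return "very strong"
    return "decisive"
```

For instance, the static-vs-baseline glance count (BF01 = 0.21, i.e. BF10 ≈ 4.8) falls in the "substantial" band, matching the text above.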
Glance duration
H0 is 5 times less likely than H1 (Static vs. Baseline)
H0 is 2 times less likely than H1 (Moving vs. Baseline)
Wrap up
Findings
Indicating the appropriate maneuver reduces response times and increases the safety margin for time to collision.
Self-reports indicate lower mental demand for the moving cue.
The moving cue is not more likely than the baseline to capture gaze.
Users prefer the moving cue.
Conclusions
Ambient light displays can be effective in shifting attention to a take-over situation.
The presentation pattern of the light cues does not necessarily impair driving performance.
Presenting contextual information in TORs can result in more desirable behavior.
Thank you for your attention
[email protected]@humanmachinesystems.org