BRAVE – BRidging gaps for the adoption of Automated VEhicles
No 723021

D5.1 Test methodology and use case specification

Lead Author: Alexander Eriksson (VTI)
With contributions from: David P. Pancho, David Cabañeros (TREELOGIC), Anders Lindström, Niklas Strand, Henriette Wallén Warner (VTI), Alain PIPERNO (UTAC), Harald Widlroither (FHG)
Reviewer: David P. Pancho (TREELOGIC)

Deliverable nature: Report (R)
Dissemination level (Confidentiality): Public (PU)
Contractual delivery date: 30th November 2017 (Month 6)
Actual delivery date: 30th November 2017 (Month 6)
Version: 1.0
Total number of pages: 33
Keywords: Methodology, use case




Deliverable D5.1 BRAVE

723021 Page 2 of 33

Abstract

This deliverable deals with the test methodology and use cases for the BRAVE project. It presents the use cases to be assessed during the project, together with the experimental set-up, the participant recruitment procedure, and the facilities to be used throughout the course of the project. Furthermore, it details a number of measures to be collected during testing for the assessment of acceptance and trust, which will inform the technical and HMI development in WP3 and WP4.

Page 3: D5.1 Test methodology and use case specificationbrave-project.eu/wp-content/uploads/2018/02/D5.1... · Deliverable 5.1 contains a description of the test methodology and use case

Deliverable D5.1 BRAVE

723021 Page 3 of 33

Executive summary

Deliverable 5.1 contains a description of the test methodology and use case specification for the 6 test blocks

in Work Package 5 of the BRAVE-project.

The test sequence in BRAVE follows the V-ISO model [1] and consists of the following steps:

• Tests of already available market systems and vehicles.

• Concept and technology development.

• Test in a simulated environment.

• Test in test track.

• Test in real traffic.

The development sequence and project progression in BRAVE are arranged around this concept, with tests planned every 6 months throughout the project. The testing arrangements differ considerably between these tests and are therefore described in more detail in this deliverable.

The purpose of the tests conducted in BRAVE is to gauge the acceptance of automated vehicles among drivers, passengers and vulnerable road users, based on the findings in WP2, through a combination of field trials on test tracks and proving grounds, on the open road, and in simulated environments. This is achieved by drawing on the collective expertise in the project, where each partner involved in WP5 contributes state-of-the-art facilities and expertise. A total of six test blocks are scheduled in BRAVE. The initial tests will benchmark contemporary driving automation systems from the driver/passenger and VRU points of view, followed by HMI development in a driving simulator. The finalised HMI concepts are then tested in a high-fidelity moving-base simulator in combination with the algorithms and sensor models provided by WP3 and WP4, before moving to test tracks with dummy targets. The final stage of testing will take place on the open road, showcasing the results of the developed HMI, algorithms and sensor platforms in an effort to ensure driver/passenger and VRU acceptance of such vehicles, in accordance with the V-ISO model.

Page 4: D5.1 Test methodology and use case specificationbrave-project.eu/wp-content/uploads/2018/02/D5.1... · Deliverable 5.1 contains a description of the test methodology and use case

Deliverable D5.1 BRAVE

723021 Page 4 of 33

Document Information

IST Project Number: 723021   Acronym: BRAVE
Full Title: BRidging gaps for the adoption of Automated VEhicles
Project URL: www.brave-project.eu
EU Project Officer: Georgios CHARALAMPOUS

Deliverable: Number D5.1, Title: Test methodology and use case specification
Work Package: Number WP5, Title: User validation through realistic testing iterations
Date of Delivery: Contractual M06, Actual M06
Status: version 1.0, final
Nature: report
Dissemination level: public
Authors (Partner): VTI
Responsible Author: Alexander Eriksson (VTI), E-mail: [email protected], Phone: +46 31 750 26 24

Abstract (for dissemination): This deliverable deals with the test methodology and use cases for the BRAVE project. It presents the use cases to be assessed during the project and the experimental set-up, participant recruitment procedure and facilities to be used throughout the course of the project. Furthermore, this deliverable details a number of measures to be collected during testing for the assessment of acceptance and trust to inform the technical and HMI development in WP3 and WP4.

Keywords: Methodology, use case, automated driving, autonomous driving, road safety

Version Log

Issue Date | Rev. No. | Author | Change
23/10/17 | 0.1 | Alexander Eriksson (VTI) | First complete version
26/10/17 | 0.2 | David P. Pancho (TREELOGIC) | Added WP3 and WP4 work plan; adapted to the document template
30/11/17 | 0.3 | Alexander Eriksson (VTI) | Further revision and clarification, some minor restructuring
1/11/17 | 0.4 | Alexander Eriksson (VTI) | Content improvement
10/11/17 | 0.5 | David P. Pancho (TREELOGIC) | Revision of the document, with particular attention to bibliography, image and table formatting
13/11/17 | 0.6 | Alexander Eriksson (VTI) | Merged feedback from project partners
24/11/17 | 0.7 | Alexander Eriksson (VTI) | Merged feedback from project partners


27/11/17 | 0.8 | Alexander Eriksson (VTI) | Revision of the document introduction, executive summary and sub-headings related to experimental design
28/11/17 | 0.9 | Alexander Eriksson (VTI) | Reference formatting
29/11/17 | 0.10 | David Cabañeros (TREELOGIC) | WP3 improvement and revision
29/11/17 | 0.11 | David P. Pancho (TREELOGIC) | Format revision, Test#4-6 improvement
30/11/17 | 1.0 | David P. Pancho (TREELOGIC) | Final adjustments


Table of Contents

Executive summary
Document Information
Table of Contents
List of figures
List of tables
Abbreviations
1 Introduction
2 Use cases to be tested
3 Data collection
  3.1 Measures
4 Participants
5 Technical works planning
  5.1 Planning for WP3 works
  5.2 Planning for WP4 works
    5.2.1 Vehicle-related use cases
    5.2.2 VRU-related use cases
6 Tests overview
  6.1 Test#1
    6.1.1 Slovenia Test#1L
    6.1.2 Germany VRU Test#1L: Automated Emergency Braking (AEB) in presence of VRUs, pedestrians and cyclists
    6.1.3 Multi-country Test#1MC
  6.2 Test#2
    6.2.1 Scenario
    6.2.2 Experimental design
    6.2.3 Hypothesis
    6.2.4 Dependent variables
  6.3 Test#3
    6.3.1 Apparatus
    6.3.2 Part 1: Driving simulation
    6.3.3 Part 2: Driving simulation with automated vehicles in surrounding
    6.3.4 Part 3: Pedestrian simulation
  6.4 Test#4
  6.5 Test#5
  6.6 Test#6
7 Conclusions
References


List of figures

Figure 1: Steering reaction time calculation description
Figure 2: Screenshot of the virtual environment for Test#1L
Figure 3: VTI Driving Simulator IV to be used in Test#3
Figure 4: Equipment proposed to be used during Test#4


List of tables

Table 1: Overall technical planning
Table 2: Planning for WP3 works
Table 3: Planning for WP4 works
Table 4: Research design Test #1 Slovenia
Table 5: Research design Test #1 Germany


Abbreviations

ACC Adaptive Cruise Control

ADAS Advanced Driver Assistance Systems

AEB Automatic Emergency Braking

DMS Driver Monitoring System

DoF Degrees of Freedom

HMI Human Machine Interface

IS Integrated System

TTC Time To Collision

TET Time Exposed Time to Collision

TIT Time Integrated Time to Collision

MLP Mean Absolute Lateral Position

SDLP Standard Deviation of Lane Position

SATI SHAPE Automation Trust Index

SUS System Usability Scale

TLX Task Load Index

DBQ Driver Behaviour Questionnaire

VRUs Vulnerable Road Users


1 Introduction

This document gives a detailed specification of the use cases to be used in testing, study design and measures

to be used throughout the project.

It must be noted that this document should be considered a 'living document': the nature of the work in BRAVE, in accordance with the V-ISO model, requires planned work to build on research studies already conducted within the project. Rather than defining the nature of all the planned work outlined in the BRAVE DOA in this document, we have included preliminary work plans and left certain areas 'to be defined' based on the on-going activities in BRAVE. The partners of the BRAVE project will ensure that this document is kept up to date; it will be updated every 9-12 months as work progresses.

The test sequence in BRAVE follows the V-ISO model [1] and contains the following components.

• Tests of already available market systems and vehicles.

• Concept and technology development.

• Test in a simulated environment.

• Test in test track.

• Test in real traffic.

The development sequence and project progression in BRAVE are arranged around this concept, with tests planned every 6 months throughout the project. The testing arrangements differ considerably between these tests and are therefore described in more detail in this deliverable.


2 Use cases to be tested

BRAVE considers five different use cases to be tested, proposed by UTAC:

1. Automated Emergency Braking (AEB) in the presence of VRUs, pedestrians and cyclists

2. Automated parking with VRUs or pedestrians in proximity

3. Automated Driving (AD) with aggressively entering vehicles

4. AD under changing and difficult perception conditions (e.g. tunnels)

5. AD during manoeuvres and transitions at obstacles

For the definition of the AEB use case, different scenarios have been defined, based on the Euro NCAP protocols and on regulation risks from the UTAC point of view. These scenarios deal with Automatic Emergency Braking in six different situations involving cyclists and pedestrians, either isolated or in groups. For the test track experiments, the vehicle will be equipped with a Velodyne (32-layer, 360-degree-field-of-view laser) and a 4-layer SICK laser located at the front of the car as the main sensors to detect the presence of VRUs in the surroundings of the ego-vehicle. The tests will be conducted using dummies in order to avoid risks. The automated vehicle will be presented with dummies emerging in different numbers under different circumstances, and will have to react appropriately in all cases: intervening when necessary and refraining from intervening when not needed. For initial testing and HMI design, the VR cave at Fraunhofer (cf. Section 6.1.2) and a pedestrian simulator currently under construction at VTI will be used.

For the Automated Driving (AD) Use Cases, the following scenarios have been defined for BRAVE:

• Automated parking with pedestrian proximity: as in the previous set of use cases (AEB), the vehicle

will be equipped with Velodyne and SICK laser as the main sensors to detect pedestrians in the

proximity of the ego-vehicle. The automated car will execute an automated parking manoeuvre when

ordered by the driver. The parking manoeuvre can be automatically interrupted or paused if a

pedestrian approaches the vehicle entering the drivable area while parking. If the danger disappears

(the pedestrian moves away), the automatic parking manoeuvre will resume.

• Merging vehicles in traffic jam: the goal of this use case is to test the ability of automated vehicles to merge safely and smoothly with vehicles entering the highway from a ramp (especially those driving aggressively). The automated vehicle will use radar, Velodyne, and G5 communications as the main sensors to detect the merging vehicle. Upon detection, the automated vehicle will execute the appropriate merging manoeuvre, accounting for the position, velocity, and acceleration of the vehicle entering the highway.

• Tunnels and suddenly difficult perception in traffic jam: in this use case, a number of vehicles will

drive in parallel while approaching a tunnel. Before entering the tunnel, some of the vehicles will

change lane, due to poor perception conditions, from the left-most to the centre lane. The automated

vehicle will react accordingly, implementing the most appropriate manoeuvre (velocity profile) in

terms of safety and comfort. Radar and Velodyne will be the main sensors used in these tests to

account for other vehicles positions and velocities.

• Manoeuvres and transitions at obstacles in traffic jam: this use case tests the reaction of the automated vehicle in complex traffic situations. The ego-vehicle will face an obstacle in its lane (a stopped car, simulating a technical failure). The ego-vehicle will then assess whether there is free space in the adjoining lane to accomplish a safe lane change manoeuvre. If so, the automated vehicle will perform an automatic lane change; otherwise, it will reduce speed, coming to a full stop if necessary. In order to ease the implementation of the use case, the identification of the failing vehicle (the obstacle in the lane) will be done using a communication link. The detection of other vehicles in adjoining lanes will be carried out using radar and Velodyne.


3 Data collection

This section outlines the different data to be collected for assessment of driver/passenger/VRU acceptance

and trust, as well as performance metrics from the vehicle, and driving simulators.

3.1 Measures

Objective measures based on the SAE J2944 standard [2]:

• Time Exposed Time to Collision (TET) is the duration of time over which the time to collision

measure is below some undesired threshold. TET is a more safety-relevant measure than Time To

Collision (TTC) alone because it considers exposure time [2]. There are a number of threshold

values suggested for the TTC: a TTC threshold of 4 seconds for TET has been suggested [3-5], while Hogema and Janssen [6] suggest a minimum TTC threshold of 3.5 seconds for drivers without ACC, and 2.6 seconds for drivers with ACC.
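As an illustration, TET can be computed directly from a sampled TTC trace. The sketch below assumes a uniformly sampled signal with interval dt; the function name is illustrative, not project code.

```python
def time_exposed_ttc(ttc_series, threshold, dt):
    """Sum the time (s) that the TTC signal spends below the chosen threshold.

    ttc_series: TTC samples in seconds, uniformly spaced dt seconds apart.
    Negative samples (no closing conflict) are excluded. Illustrative sketch.
    """
    return sum(dt for ttc in ttc_series if 0 <= ttc < threshold)

# With a 4 s threshold and 10 Hz sampling, three samples fall below:
samples = [6.0, 5.0, 3.5, 3.0, 2.8, 4.5, 6.0]
print(time_exposed_ttc(samples, 4.0, 0.1))  # ~0.3 s
```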

• Time Integrated Time to Collision (TIT) [7] is the time interval, usually measured in seconds, over which the time to collision is less than some undesired threshold, weighted by how far below that threshold the time to collision is at each moment [2].

TIT* = Σᵢ₌₁ᴺ ∫₀ᵀ [TTC* − TTCᵢ(t)] dt,  for all 0 ≤ TTCᵢ(t) ≤ TTC*

The first time the term TIT is used in a document, the TTC value above which samples are ignored shall be reported. Generally, TIT values are highly correlated with TTC [2]. These measures are used in the collision-warning context and can help the designer pick warning thresholds.
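In discrete time the integral above reduces to a weighted sum over the samples below the threshold. A minimal sketch (function name and sampling assumptions are illustrative):

```python
def time_integrated_ttc(ttc_series, threshold, dt):
    """Discrete approximation of TIT: accumulate (TTC* - TTC(t)) * dt
    over samples where 0 <= TTC(t) <= TTC* (the undesired threshold)."""
    return sum((threshold - ttc) * dt
               for ttc in ttc_series if 0 <= ttc <= threshold)

# Samples far below the threshold contribute more heavily than near misses:
print(time_integrated_ttc([6.0, 3.5, 3.0, 2.8, 4.5], 4.0, 0.1))  # ~0.27
```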

• Distance Headway: longitudinal distance along a travelled way, usually measured in feet or meters, between two vehicles, measured from the same common feature of both vehicles (for example, the front axle or front tire contact patch, the front bumper, the leading surface of both vehicles, or the trailing surface). Note: the distance gap and distance headway differ by one "vehicle" length which, depending on the location of the reference point, can be the length of the lead vehicle (if the leading surface is used), the length of the following vehicle (if the trailing surface is used), or some combination of the two (for example, if the front axle is used). The first time the term distance headway is used in a document, the two vehicles in the distance headway measurement and the value above which distance headway is ignored, if any, shall be reported [2].

• Time Headway: time interval, usually measured in seconds, between two vehicles, measured from the same common feature of both vehicles (for example, the front axle or front tire contact patch, the front bumper, the leading surface of both vehicles, or the trailing surface) [2]. The first time the term time headway is used in a document, the two vehicles in the measurement and the value above which time headway is ignored, if any, shall be reported.
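For concreteness, both headway measures can be sketched from the positions of a common reference feature on each vehicle (here the front bumpers; the names are illustrative). Time headway is computed here as distance headway divided by the following vehicle's speed, one common convention:

```python
def distance_headway(lead_pos_m, follow_pos_m):
    """Longitudinal distance (m) between the same feature on both vehicles."""
    return lead_pos_m - follow_pos_m

def time_headway(lead_pos_m, follow_pos_m, follow_speed_mps):
    """Seconds for the following vehicle to cover the distance headway."""
    return distance_headway(lead_pos_m, follow_pos_m) / follow_speed_mps

print(time_headway(50.0, 20.0, 15.0))  # 30 m at 15 m/s -> 2.0 s
```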

• Mean absolute lateral position (MLP): this metric describes lane-keeping accuracy and is calculated in the following way:

MLP = | (Σᵢ₌₁ⁿ dᵢ) / n |

where dᵢ is the distance measured from the centre of the vehicle to the lane centre.
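A direct transcription of the formula (the function name is illustrative):

```python
def mean_abs_lateral_position(offsets_m):
    """MLP: absolute value of the mean signed offset (m) from the lane centre."""
    return abs(sum(offsets_m) / len(offsets_m))

# Offsets on alternating sides of the centre partially cancel before
# the absolute value is taken:
print(round(mean_abs_lateral_position([0.1, -0.2, 0.3, 0.2]), 3))  # 0.1
```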

Page 13: D5.1 Test methodology and use case specificationbrave-project.eu/wp-content/uploads/2018/02/D5.1... · Deliverable 5.1 contains a description of the test methodology and use case

Deliverable D5.1 BRAVE

723021 Page 13 of 33

• Standard Deviation of Lane Position (SDLP) is a metric of the variability in lane positioning and is calculated in the following manner:

SDLP = √[ (1/(N−1)) Σᵢ₌₁ᴺ (xᵢ − x̄)² ]

where
xᵢ = the i-th value of lane position
x̄ = the mean lane position of the sample
N = the number of data points in the sample

Kircher et al. [8] report 0.2 m as a typical SDLP value for an alert driver. Mullen et al. [9] found an SDLP of 0.38 m for a control condition, and 0.30 m when a lane-departure-warning system was provided in a rural driving condition.
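SDLP is simply the sample standard deviation of the lane-position trace, so the standard library suffices (sketch):

```python
import statistics

def sdlp(lane_positions_m):
    """Sample standard deviation (N - 1 denominator) of lane position (m)."""
    return statistics.stdev(lane_positions_m)

# A trace oscillating within +/- 0.2 m of the lane centre:
print(round(sdlp([0.0, 0.2, -0.2, 0.1, -0.1]), 3))  # 0.158
```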

• Steering reaction time is the time between an event and the first conscious steering input exceeding a certain threshold. In studies of automated driving a threshold of 2 degrees is common and is therefore recommended for comparability. Such metrics tend to have a log-normal distribution, so descriptive statistics and statistical tests should use the median and quartiles as substitutes for the mean and standard deviation [10]. The figure below shows a number of versions of the steering reaction time measure, assessing different aspects of lateral movement behaviour.

Figure 1 Steering reaction time calculation description.
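The 2-degree criterion and the median/quartile summary can be sketched as follows (the names and event timing are illustrative; a real pipeline would also need to distinguish conscious inputs from drift):

```python
import statistics

def steering_reaction_time(t_s, angles_deg, event_time_s, threshold_deg=2.0):
    """Time from the event to the first steering sample exceeding the threshold."""
    for t, angle in zip(t_s, angles_deg):
        if t >= event_time_s and abs(angle) > threshold_deg:
            return t - event_time_s
    return None  # no qualifying steering input observed

rt = steering_reaction_time([0.0, 0.1, 0.2, 0.3, 0.4],
                            [0.0, 0.5, 1.0, 2.5, 3.0], event_time_s=0.1)
print(round(rt, 2))  # 0.2

# Summarise across participants with median and quartiles, not mean/SD:
rts = [1.1, 1.4, 1.6, 2.0, 3.5]
q1, median, q3 = statistics.quantiles(rts, n=4)
print(median)  # 1.6
```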

Subjective measures:

• SHAPE Automation Trust Index (SATI) – The purpose of SATI is to provide a human factors technique for measuring human trust in ATC systems. The measure is primarily concerned with trust in ATC computer-assistance tools and other forms of automation support, which are expected to be major components of future ATM systems. It covers the following areas: reliability, accuracy,


understandability, confidence, liking and robustness [11]. The internal consistency of the new SATI is α = 0.83. Owing to its conciseness, SATI should mainly be used for an initial assessment; for more detailed analyses of trust, it is recommended to conduct an interview in which the various facets of trust, as well as the reasons for trust and mistrust, are examined. Respondents answer six questions on the SATI questionnaire. Responses to the items are collected on a seven-point Likert scale ranging from "never" to "always". These responses are pooled with those of other respondents, and scores are obtained by summing the responses and computing the mean per scale across all questionnaires. The pooled data is arranged by scales.

• Van der Laan Technology Acceptance Scale – A technology acceptance questionnaire [12] is used to measure the perceived usefulness of, and satisfaction with, the system being tested. The usefulness score is determined from the following items on a semantic-differential five-point scale: useful–useless, bad–good, effective–superfluous, assisting–worthless, and raising alertness–sleep inducing. The satisfaction score is determined by four items (2, 4, 6, 8), the usefulness score by the remaining five.
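Assuming the nine item ratings have already been recoded onto the scale's -2..+2 range with mirrored items flipped (the recoding step is omitted here), the two scores are simple means. A sketch with an illustrative function name:

```python
def van_der_laan_scores(items):
    """Split nine recoded ratings (-2..+2, positive = favourable) into the
    usefulness (items 1, 3, 5, 7, 9) and satisfaction (items 2, 4, 6, 8) means."""
    usefulness = [items[i] for i in (0, 2, 4, 6, 8)]
    satisfaction = [items[i] for i in (1, 3, 5, 7)]
    return sum(usefulness) / len(usefulness), sum(satisfaction) / len(satisfaction)

print(van_der_laan_scores([2, 1, 1, 2, 0, 1, 2, 1, 1]))  # (1.2, 1.25)
```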

• System Usability Scale – The System Usability Scale (SUS) provides a "quick and dirty" but reliable tool for measuring usability. It consists of a 10-item questionnaire with five response options per item, from "strongly agree" to "strongly disagree" [13]. For the odd-numbered (positively phrased) items, the score is the scale position minus 1; the even-numbered items are reverse phrased, so their score is 5 minus the scale position. All item scores are then summed, giving a score ranging from 0 to 40, which is multiplied by 2.5 to yield the final SUS score between 0 and 100. The higher the score, the better the perceived usability of the system.
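The scoring rule can be written directly, following the standard SUS convention in which the odd-numbered items are the positively phrased ones (sketch; responses are the raw 1-5 scale positions for items 1-10):

```python
def sus_score(responses):
    """SUS score 0-100 from ten 1-5 responses: odd items score (r - 1),
    reverse-phrased even items score (5 - r); the 0-40 sum is scaled by 2.5."""
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```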

• NASA Raw Task Load Index – The NASA Raw TLX is used to evaluate perceived workload [14, 15]. The questionnaire consists of six items: mental demand, physical demand, temporal demand, performance, effort, and frustration. The items have a 21-tick scale, ranging from "very low" to "very high", except the performance scale, which ranges from "perfect" to "failure". The overall workload score is calculated by summing the sub-scales [14, 15].
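The Raw TLX score is a plain unweighted sum of the six sub-scales. A sketch (the 0-20 tick coding is an assumption about how the 21-tick scale is digitised):

```python
def raw_tlx(ratings):
    """Raw (unweighted) NASA-TLX: sum of the six sub-scale ratings (0-20 each)."""
    scales = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")
    return sum(ratings[s] for s in scales)

print(raw_tlx({"mental": 12, "physical": 4, "temporal": 10,
               "performance": 6, "effort": 11, "frustration": 8}))  # 51
```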

• Arnett Inventory of Sensation Seeking (AISS) – The AISS was designed to assess sensation seeking as a personality trait; sensation seeking is assumed to contribute to risk preferences and is defined as the need for novel and intense stimulation [16]. The questionnaire consists of 20 items answered on 4-point scales, organised into two sub-scales: Novelty and Intensity.

• The mini Driver Behaviour Questionnaire – The Driver Behaviour Questionnaire (DBQ) [17] is a self-report instrument used to assess how often drivers perform aberrant behaviours in traffic. It measures three behavioural categories: violations, errors and lapses. The difference between these categories is that violations are deliberate acts, errors are acts that fail to achieve the intended outcome, and lapses are unintentional acts. The DBQ has proved a useful tool for predicting self-reported accident involvement. Bivariate correlations between factor scores obtained from Reason's 27-item version and from the 'mini' 12-item version revealed that, at each time-point, the short version accurately reproduced the full version [18].

• The Driver Skill Inventory (DSI) – The DSI is a 13-item questionnaire which asks drivers to rate their performance as better than, worse than, or as good as 'the average driver' [19]. It provides an indication of a driver's perceived driving ability, which may correlate with willingness to use, and trust in, automated driving systems.

• The Van Westendorp Price-Sensitivity Meter – The PSM is intended to provide cues for optimal pricing of novel technology. The PSM does not ask for a single value but rather for four different price points [20]:


o At what price would you consider the product to be inexpensive, such that you would consider it a bargain? (Cheap)

o At what price would you consider the product to be expensive, but would still consider buying it? (Expensive)

o Above what price would the product become so expensive that you would not consider buying it? (Too expensive)

o Below what price would the product become so inexpensive that you would doubt its quality and not consider buying it? (Too cheap)

The responses to these questions are prices, which are plotted cumulatively for the analysis stage. The PSM analysis creates two additional curves, ‘not cheap’ and ‘not expensive’, which are the reversed cumulative values of the ‘cheap’ and ‘expensive’ curves. A total of six curves are plotted, and four critical price points are identified from the curve intersections in the graph. These are then used to calculate an approximate acceptable price range [21].
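As a sketch of this analysis stage, the two endpoints of the acceptable price range can be located numerically. Note that exact intersection definitions vary between PSM implementations; the cumulative-curve conventions below are one common choice, not necessarily the one used in [21]:

```python
import numpy as np

def psm_price_points(too_cheap, cheap, expensive, too_expensive):
    """Van Westendorp PSM sketch: locate the lower and upper bounds of the
    acceptable price range from the four sets of survey answers."""
    tc, c = np.asarray(too_cheap, float), np.asarray(cheap, float)
    e, te = np.asarray(expensive, float), np.asarray(too_expensive, float)
    grid = np.linspace(min(tc.min(), c.min()), max(e.max(), te.max()), 500)
    # Cumulative shares at each candidate price p:
    f_tc = np.array([(tc >= p).mean() for p in grid])  # still 'too cheap' at p
    f_c = np.array([(c >= p).mean() for p in grid])    # still 'cheap' at p
    f_e = np.array([(e <= p).mean() for p in grid])    # already 'expensive' at p
    f_te = np.array([(te <= p).mean() for p in grid])  # 'too expensive' at p
    not_cheap, not_expensive = 1.0 - f_c, 1.0 - f_e    # the reversed curves
    lower = grid[np.argmin(np.abs(f_tc - not_cheap))]      # marginal cheapness
    upper = grid[np.argmin(np.abs(f_te - not_expensive))]  # marginal expensiveness
    return lower, upper
```

The returned pair approximates the acceptable price range; the two remaining intersections (optimal price point and indifference price point) can be found the same way from the other curve pairs.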


4 Participants

Participants will be recruited as set out in deliverables D8.1 and D8.2 and will be thoroughly informed of any risks associated with their participation before being asked to provide informed consent to commence participation in the study. After the study the participants will receive a full debriefing. The study will comply with the American Psychological Association Code of Ethics [22].


5 Technical works planning

This section presents the planning for the research and development activities in WP3 – “Vehicle-driver interaction and driver monitoring concepts” and WP4 – “Vehicle-environment interaction concepts, enhancing ADAS”, thereby specifying the interrelations between WP2, 3, 4 and 5.

The plan describes the different experiments and use cases that will be carried out during the experimentation phase in months 12, 18, 24, 30, and 36, both in the simulator and in DRIVERTIVE, the autonomous vehicle of the University of Alcalá. The experiments constitute a selection of the possible use cases proposed by EURO NCAP, as described in the BRAVE proposal.

As a summary, the following table gives an overview of the overall planning, which is detailed in the following subsections:

Table 1 Overall technical planning

TIMING:                 M6 | M12 | M18 | M24 | M30 | M36
TESTS:                  Simulators: Test#1 (M6), Test#2 (M12), Test#3 (M18); DRIVERTIVE: Test#4 (M24), Test#5 (M30), Test#6 (M36)
DRIVER MONITORING DEVS: Beta version (M12) | Stable version (M18) | Final version (M24)
HMI DEVS:               Beta version (M12) | Stable version (M18) | Final version (M24)
INTEGRATION WP3:        Beta version (M18) | Final version (M24)
INTEGRATION WP4:        Stable version 1 (M24) | Stable version 2 (M30) | Final version (M36)

The maturity level of the developed systems (DMS and HMI) is defined as follows:

• Beta version. Including early design and prototype-based demonstrators based on findings from the

review of state of the art approaches. No integration is expected at this level.

• Stable version. Functional version of the system, based on a formal software architecture and

hardware setup. Includes integration at functional level within DMS and HMI systems.

• Final version. Includes the refined software implementation, based on the results from the stable-version evaluation. Both DMS and HMI systems are fully integrated with each other and within the platform in real-time.

5.1 Planning for WP3 works

In this section the joint roadmap for T3.3, T3.4, and T3.5 is described. Within WP3, virtual prototypes for the demonstration of vehicle-driver interaction and driver monitoring concepts are iteratively developed, tested and improved. These prototypes are then evaluated at the respective pilot sites. Based on the test results, the concepts are then refined and improved.


Table 2 Planning for WP3 works

TIMING:        M6 | M12 | M18 | M24 | M30 | M36
TESTS:         Simulators: Test#1 (M6), Test#2 (M12), Test#3 (M18); DRIVERTIVE: Test#4 (M24), Test#5 (M30), Test#6 (M36)
TREE:          DMS-v0 (M12) | DMS-v1 (M18) | DMS-v2 (M24)
FHG:           HMI-v0 (M12) | HMI-v1 (M18) | HMI-v2 (M24)
INTEGRATE WP3: IS-v0 (M18) | IS-v1 (M24)

TREE will start working on the first version of the Driver Monitoring System (DMS) from M9. Based on the findings of the survey in WP2, HMI concepts for vehicle-driver interaction are derived and iteratively improved in a user-centred approach, resulting in a prototype version for the second pilot tests in M12. Concurrently, the first release of the subsystems (DMS-v0 and HMI-v0) will take place in time for the second pilot test. This version will be based on existing approaches from the state of the art, providing a clearer picture of these subsystems within the overall project and an opportunity to test them within the simulator. This version will be considered the baseline for the upcoming development. Additionally, an extensive analysis of sensor hardware solutions (commercial or custom-made) will be performed in order to build the equipment specification.

The next versions of the DMS (DMS-v1) and HMI (HMI-v1) subsystems will be provided for testing and validation at the third pilot stage (M18). This includes a fully functional platform implementing driver monitoring concepts, comprising both hardware and software elements. Additionally, to ease integration with the HMI system, TREE and FHG will work together during these months on the definition and implementation of the systems’ interfacing. This results in an Integrated System (IS) consisting of both the HMI and DDE concepts (IS-v0).

Finally, based on the results from pilot Test#3, the DMS-v2 version will implement a refined version of the system in order to i) improve the functionalities already developed for v1 and ii) include cutting-edge techniques from the most recent state of the art, evolving in line with the expected innovations from academia and industry. All these improvements will be included as part of the final system according to their maturity level.

At Test#4 the most promising HMI elements are realised in the test vehicle, resulting in HMI-v2. This version might differ from version 1 insofar as it is restricted to use cases that can be tested with the DRIVERTIVE prototype on the test track. Furthermore, the HMI concept is, if necessary, adapted to technical restrictions arising from the integration in the prototype. A mature and robust integrated system is provided at Test#4 with all the improvements included (IS-v1).

5.2 Planning for WP4 works

BRAVE will test a number of use cases related to vehicles and vulnerable road users, mainly pedestrians, as described by the recommendations issued by EURO NCAP. A selection of those use cases will be implemented and tested during the experimentation phase of the project. In all cases, the selected use cases deal with anticipated vehicle behaviour in order to enhance safety when interacting with other vehicles or with VRUs (pedestrians and cyclists). The EURO NCAP use cases are devised for testing systems aiming to enhance current ADAS or even to provide advanced operation of self-driving cars in complex situations. The criterion used to select the use cases to be tested is the requirement of increasing user acceptance of self-driving cars by exhibiting advanced functionality beyond the current state of the art. The description of the different use cases for vehicles and VRUs, together with the preliminary planning for conducting the experimentation phase, is provided in the following sections.


Table 3 Planning for WP4 works

TIMING: M6 | M12 | M18 | M24 | M30 | M36
TESTS:  Simulators: Test#1 (M6), Test#2 (M12), Test#3 (M18); DRIVERTIVE: Test#4 (M24), Test#5 (M30), Test#6 (M36)
WP4:    Test use cases VEH-1 and VEH-2 v1 (repeated over three test iterations)

5.2.1 Vehicle-related use cases

In this section, a preliminary description of the selected EURO NCAP use cases for interaction with other

vehicles is provided. These use cases consider complex situations on highways and will be emulated on

proving grounds in France and Spain.

• Use Case VEH-1: a car enters the highway from the ramp aggressively while the ego-vehicle drives along the right-most lane. The ego-vehicle must be able to sense the entering vehicle using some of the on-board sensors, such as radar and laser, or receive the intentions of the oncoming vehicle by means of a communications link, and react appropriately by giving way, changing lane, or taking whatever action is deemed appropriate as a function of the situation (relative velocity and acceleration, intersecting trajectories, safety gap, etc.). The ego-vehicle will continue without changing its speed or performing any lane change manoeuvre in case there is no conflict between its predicted trajectory and the predicted trajectory of the vehicle entering the highway.

• Use Case VEH-2: a preceding car changes lane with no prior signalling while infringing the safety gap with respect to the ego-vehicle. The relative velocity between the lane-changing car and its predecessor in its lane is very high. The adjoining lane of the lane-changing car has sufficient free space; all the conditions for a lane change manoeuvre are met. Those conditions should be anticipated by the ego-vehicle, so that it can react in an anticipatory manner (for example, by decreasing velocity or by changing lane, if possible, in order to create a safe situation for all cars involved in the scene).
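The trajectory-conflict check underlying these use cases can be illustrated with a deliberately simplified constant-velocity sketch. The kinematics and the time-gap threshold below are illustrative assumptions, not the project's actual prediction algorithm:

```python
def merge_conflict(ego_pos_m, ego_speed_ms, ramp_pos_m, ramp_speed_ms,
                   merge_point_m, min_gap_s=2.0):
    """Do the ego vehicle and the entering vehicle reach the merge point
    within min_gap_s seconds of each other, assuming constant speeds?
    If not, the ego vehicle can continue without changing speed or lane."""
    t_ego = (merge_point_m - ego_pos_m) / ego_speed_ms
    t_ramp = (merge_point_m - ramp_pos_m) / ramp_speed_ms
    return abs(t_ego - t_ramp) < min_gap_s
```

For example, an ego vehicle at 25 m/s and a ramp vehicle at 30 m/s that would arrive at the merge point almost simultaneously yields a conflict, whereas a slower ramp vehicle far from the merge point does not.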

5.2.2 VRU-related use cases

In this section, a preliminary description of the selected EURO NCAP use cases for interaction with VRUs is provided. These use cases consider complex situations in urban and road scenarios. As with the vehicle-related use cases, the VRU-related use cases will be emulated on proving grounds in France and Spain using dummies.

• Use Case VRU-1: a pedestrian steps onto the street, continues to walk and cuts across the ego-vehicle’s trajectory. The ego-vehicle must be able to detect the crossing intention in an anticipatory manner and act accordingly. The car will decrease speed significantly (coming to a full stop if necessary) and signal the pedestrian by switching on the GRAIL interface.

o Similar performance should be attained if the system detects that the pedestrian is keeping

eye contact with the driver.

o The car should decrease velocity in a preventive manner if the system detects that a

pedestrian is walking along the sidewalk in parallel to the road, even if no intention to cross

is detected (but there are chances that it might happen).


• Use Case VRU-2: a pedestrian crosses the street all of a sudden, intersecting the ego-vehicle’s trajectory and creating a dangerous situation for the pedestrian and for the car’s occupants. The car should be able to perform an automatic emergency braking (AEB) manoeuvre. Only if it is safe enough, the car could perform an avoidance manoeuvre.

• Use Case VRU-3: the system detects a cyclist in front of the ego-vehicle. The car should overtake the cyclist while maintaining an appropriate safety distance (lateral distance) and reducing the speed accordingly. The overtaking manoeuvre must be performed only if there is free space along the ego-lane and the adjoining lane (no oncoming traffic). Otherwise, the car must reduce speed and stay behind the cyclist until the conditions for a safe overtake are met.
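The overtake precondition in VRU-3 (free adjoining lane and adequate lateral clearance) can be sketched as a simple gate. The numeric thresholds below are illustrative assumptions, not values specified by EURO NCAP or BRAVE:

```python
def safe_to_overtake(oncoming_gap_s, lateral_clearance_m,
                     min_gap_s=8.0, min_clearance_m=1.5):
    """Allow the overtake only if the time gap to oncoming traffic and the
    lateral clearance to the cyclist both meet their thresholds; otherwise
    the vehicle should slow down and stay behind the cyclist."""
    return oncoming_gap_s >= min_gap_s and lateral_clearance_m >= min_clearance_m
```

A real implementation would derive the oncoming gap and achievable clearance from the perception and planning modules rather than taking them as inputs.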


6 Tests overview

This section gives information on each of the six tests individually.

6.1 Test#1

This test intends to obtain early feedback about existing market systems, including their characteristics,

interaction tools and areas of improvement.

6.1.1 Slovenia Test#1L

In all cases, all vehicles (ego-vehicle and interacting vehicle) must be driven by professional drivers for safety and insurance reasons. The purpose of these tests is to assess participants’ (vehicle passengers’) trust in and acceptance of the automated vehicle’s performance, as well as the technical performance of current ACC and lane change assist technologies in the two proposed scenarios, and to assess which additional functionalities and performance the BRAVE project can contribute.

6.1.1.1 Scenarios

6.1.1.1.1 Scenario 1: Unexpected lane change

A vehicle cuts into the ego-vehicle’s trajectory in an unexpected manner (due to difficult perception conditions, obstacles on the lane, etc.). The ego-vehicle must be equipped with ACC and lane-change assist technology (if possible).

Adaptation: This test will focus on testing adaptive cruise control and lane change assist. Two test drivers will be involved: one in a VW Arteon and one in a car without such systems. The Arteon will drive in one lane using ACC; another car will drive in the same lane in front of the Arteon, varying its speed. The participant can thus observe the function in action as the Arteon adapts its speed with ACC. This test will conclude in one round on the designated area of the polygon.

(Sub)-systems to be tested: Adaptive Cruise Control (ACC) with predictive cruise control

Adaptive Cruise Control with predictive cruise control, used for the first time in the Volkswagen Arteon, automatically adapts the vehicle speed to the speed of the vehicle ahead up to the pre-set maximum speed (maximum 210 km/h) and maintains a pre-selected distance from it. The new adaptive cruise control is predictive because it is capable of automatically altering the speed of the Arteon in response to peculiarities on the route or speed limits.

Before starting a new journey or joining a motorway, for example, the driver selects the required maximum

speed and the minimum distance to the vehicle in front. The new adaptive cruise control in the Arteon

enables a maximum speed of up to 210 km/h to be set. In conjunction with the "Sign Assist" 2.0 traffic sign

recognition system's front camera, adaptive cruise control in the Arteon even reacts to stationery vehicles,

such as the end of a traffic jam or in town at speeds of up to 50 km/h - within the system's limits. Based on

these measured results, adaptive cruise control maintains the required distance to the vehicle in front by

automatically braking or accelerating up to the pre-set maximum speed. As soon as the vehicle ahead

accelerates above the selected maximum speed or moves into the right lane ("free lane"), adaptive cruise

control accelerates the vehicle to the set maximum speed. When overtaking, adaptive cruise control

automatically begins to accelerate the vehicle as soon as the turn signal is activated. The driver of the Arteon

can override adaptive cruise control at any time with the accelerator and accelerate faster. Using the brake

pedal immediately deactivates adaptive cruise control. All messages from adaptive cruise control appear on

the display.

The new Adaptive Cruise Control with predictive cruise control is used for the first time in the Arteon. Predictive in this context means that the adaptive cruise control system autonomously and automatically adopts predictive speed control - if need be, below the pre-set maximum speed, despite the road ahead being clear. On the one hand, it takes into account information from the traffic sign recognition system and, on the other hand, predictive route data from the navigation system. Predictive adaptive cruise control also communicates with the "Sign Assist" 2.0 traffic sign recognition system and its front camera, as well as with the map material of the navigation system, to autonomously maintain the applicable speed limits on the road below


the pre-set maximum speed. The speed assistance system adjusts the driving speed to the relevant speed

limits - cornering assistance regulates the speed depending on the route and ensures comfortable driving

around bends.

6.1.1.1.2 Scenario 2: Automated Driving (AD) in case of aggressive entering vehicle

A vehicle aggressively enters the highway from the ramp. The ego-vehicle (ACC-equipped) should react appropriately by adapting its own speed (or performing a lane change).

Adaptation: The test would be done at speeds of 1-60 km/h and is adapted to what the testing car is actually capable of doing. The test person will drive the Arteon at 60 km/h in the main lane while another vehicle with proper safety equipment will drive in front and brake due to a traffic jam, which will cause the Traffic Jam Assist function to react and stop the car. The other vehicle will be driven by one of our instructors and will be accompanied by one of the test persons. We will perform preliminary testing at Vransko (19-21st December) and adapt accordingly to what we notice on the spot.

(Sub)-System to be tested:

Traffic Jam Assist

Traffic Jam Assist enables semi-automatic driving with the Arteon in traffic jams when travelling at up to 60

km/h: it can react to moving objects and takes over steering, acceleration and braking functions. Traffic Jam

Assist is only available in conjunction with the DSG dual clutch gearbox, adaptive cruise control and "Lane

Assist" lane departure warning system, as it relies on their functions to work. The system is enabled within a

speed range of 0 to 60 km/h. The radar sensors built into the front of the Arteon monitor the road, while the

camera in the base of the interior mirror also records the markings on the road. By connecting the data

collected in this process, the system automatically keeps the vehicle at a set distance from the vehicle in front

while also keeping it in lane. In the event of stop-and-go traffic, Traffic Jam Assist brakes the vehicle - even

bringing it to a complete stop. The vehicle will even pull away again without the driver having to do

anything, with the system automatically controlling the accelerator, brake pedal and steering, allowing the

car to follow the traffic. Despite all the convenience offered by Traffic Jam Assist, the driver must keep their hands on the steering wheel at all times; after all, the driver is always responsible for the Arteon and can actively override the system at any time.

6.1.1.2 Experimental design

The experimental design may be considered ‘longitudinal’ as there are no comparisons of conditions; rather, the recalibration of trust is measured after reading a description of the ADAS feature, after experiencing the first ADAS system, and after experiencing the final ADAS system.

6.1.1.3 Purpose

The main purpose of Test #1 is to establish a baseline for trust and acceptance of contemporary automated vehicles for the rest of the activities in BRAVE. Additionally, it will provide data that could be used in a ‘meta-analysis’ of all the experiments carried out in BRAVE, as the same subjective variables will be collected in all experiments. Finally, the data collected may be utilised in dissemination activities as part of BRAVE.

6.1.1.4 Dependent variables

6.1.1.4.1 Objective measures

A production vehicle will be used in this test. This means that any access to the CAN bus would be restricted to that of ISO 11898-1:2015, requiring substantial work to acquire data with little practical use for the sake of this study; logging of vehicle data will therefore be postponed until testing can be done with the BRAVE prototype vehicle.


6.1.1.4.2 Subjective measures

• SHAPE Automation Trust Index (SATI)

• Van Der Laan Technology Acceptance Scale

• System Usability Scale

• NASA Raw Task Load Index
• Arnett Inventory of Sensation Seeking (AISS)

• The mini Driver Behavior Questionnaire

• Generic willingness to use technology scale (trust in tech, confidence etc.)
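Among the listed instruments, the System Usability Scale has a fixed standard scoring rule (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is scaled by 2.5 to a 0-100 score), which can be sketched as:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten Likert answers (1-5)
    given in questionnaire order."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5
```

A respondent who strongly agrees with all positively worded (odd) items and strongly disagrees with all negatively worded (even) items scores 100; an all-neutral response sheet scores 50.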

6.1.1.5 Procedure

The procedure for the two test cases for Test #1 Vransko described in section 6.1.1.1 is summarized in the

table below.

Table 4 Research design Test #1 Slovenia

Task                                                              Estimated time (minutes)
Start block  Participant arrival, greetings and preliminary
             questionnaires, consent forms and study information  15
             Transport to test site (on track)                     5
             Familiarisation with vehicle features                 5
Block 1      Scenario description                                  3
             Test case S1: Unexpected lane change                 10
             Questionnaires                                        5
Block 2      Scenario description                                  3
             Test case S2: Aggressive entering the highway
             from the ramp                                        10
             Questionnaires                                        5
End block    Transport off test track                              5
             Debrief                                               5
TOTAL                                                             71

6.1.2 Germany VRU Test#1L Automated Emergency Braking (AEB) in presence of VRUs, pedestrians and cyclists

Due to the immediate risk to participants in the VRU category in assessing the AEB use-case in BRAVE

using contemporary systems, it was decided to move this part of Test#1L to Fraunhofer’s Virtual Reality

cave where the scenario could be assessed without risk to VRU participants and vehicle drivers. Moreover,

as the VRU test case for the AEB system is being held at Fraunhofer, Germany, it was also decided that

instead of recruiting additional participants to carry out the automated parking use case in Vransko, this use

case will also be assessed at Fraunhofer, Germany.

6.1.2.1 Automated Emergency Braking (AEB) in presence of VRUs, pedestrians and cyclists

The AEB scenario will be carried out in the Fraunhofer Virtual Reality Cave, where an intersection will be

modelled with an automated vehicle approaching the intersection whilst a participant is crossing the road in

the virtual environment. The vehicle will approach the pedestrian, and activate the AEB system, avoiding a

collision. Participant trust and acceptance of the automated vehicle will be assessed in accordance with the

measures specified in 6.1.2.3.


6.1.2.2 Apparatus

The test will be carried out with an HTC Vive. The participants can move in an area of about 3 × 4 metres in a virtual environment while wearing the VR device. Figure 2 depicts a screenshot of the environment, showing a possible situation for the AEB test.

Figure 2 Screenshot of the virtual environment for Test#1L.

6.1.2.2.1 Automated parking in case of VRU or pedestrian proximity

In the Automated Parking scenario a BMW i3 will be utilised as the test vehicle. The vehicle will carry out a semi-automated parallel parking manoeuvre, where a driver is responsible for the throttle and the brake of the vehicle. A VRU (pedestrian) will stand by the free parking bay, observe the vehicle behaviour and rate their trust in such a system using some of the measures stated in 6.1.2.3.

6.1.2.3 Dependent variables

• SHAPE Automation Trust Index (SATI)

• Van Der Laan Technology Acceptance Scale

• NASA Raw Task Load Index
• Arnett Inventory of Sensation Seeking (AISS)

• Generic willingness to use technology scale (trust in tech, confidence etc.)

6.1.2.4 Procedure

Table 5 Research design Test #1 Germany

Task                                                              Estimated time (minutes)
Start block  Participant arrival, greetings and preliminary
             questionnaires, consent forms and study information  15
Block #1     Introduction to VR cave                               5
             AEB test scenario                                    15
             Post-questionnaires                                   5
Block #2     Scenario description and pre-questionnaires           7
             Automated parking scenario                           15
             Post-questionnaires                                   3
End block    Debrief                                               5
TOTAL                                                             70

6.1.3 Multi-country Test#1MC

The multi-country test will generate cross-cultural input from stakeholders relevant to the BRAVE project. The MC test will be carried out at the partner organisations, utilising vehicles available to the individual partner organisations. This will likely generate data from a variety of vehicle makes and models. Furthermore, the Advanced Driver Assistance Systems (ADAS) used in the MC test are likely to vary, depending on local regulations and vehicle models. Data from participants will be collected pre- and post-drive utilising the measures detailed in 6.1.3.5.

6.1.3.1 Scenario

Cross-cultural input from stakeholders and other users will be gathered in a multi-country (MC) test,

arranged in a distributed fashion with one test for each participating country in BRAVE. These tests will

involve non-expert drivers using regular production vehicles in real traffic at each site (where regulations

permit, otherwise the test will be held at a closed off test track), which means that variables such as vehicle

type and model, road stretch, road type, speed profile, traffic environment and traffic intensity may differ

(systematically) between sites.

Participants will be invited to drive an ADAS-equipped vehicle, e.g. Volvo Pilot Assist 2, Tesla Autopilot, Mercedes-Benz DISTRONIC Plus (with LKA), etc. The drive will take place on a motorway, which is the likely operational design domain for future SAE J3016 Level 4 vehicles, where participants will be asked to engage and use the ADAS feature as they see appropriate. This would capture behaviours linked to trust and acceptance, such as removing hands from the steering wheel (further examples may be found in [23]).

6.1.3.2 Experimental design

The experimental design may be considered ‘longitudinal’ as there are no comparisons of conditions; rather, the recalibration of trust is measured from reading a description of the ADAS feature to after experiencing the ADAS system. As the experiment is run in multiple countries, an indication of international differences may be found. Whilst the samples may be small (e.g. 10 drivers/country), it may be possible to detect large international differences with sufficient power if there is a large effect of country on trust levels [24].
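This power argument can be checked with a quick Monte Carlo sketch. The group means, common sigma, and normality below are illustrative assumptions, not BRAVE data:

```python
import numpy as np
from scipy.stats import f_oneway

def simulated_power(group_means, n_per_group=10, sigma=1.0,
                    alpha=0.05, n_sim=2000, seed=0):
    """Fraction of simulated one-way ANOVAs (one group per country)
    that reject the null hypothesis at the given alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        groups = [rng.normal(m, sigma, n_per_group) for m in group_means]
        if f_oneway(*groups).pvalue < alpha:
            hits += 1
    return hits / n_sim

# e.g. simulated_power([3.0, 3.0, 4.0, 4.0]) for a large hypothetical
# country effect on a trust score, with 10 drivers per country
```

With a large between-country effect (here, a one-point shift in mean trust against unit within-group spread), even 10 drivers per country yields reasonable power; small effects would clearly be undetectable at this sample size.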

6.1.3.3 Hypothesis

The hypothesis for the outcome of this experiment is that there will be an initial estimated trust and acceptance level for drivers before experiencing interaction with the system (based on reading the manual of the specific system they will be interacting with, and their prior expectations), and that this level will be re-calibrated after experiencing the system (either lower or higher; a two-tailed hypothesis).

Research Question

Will a recalibration of trust take place between reading the vehicle manual, and experiencing the ADAS

feature?

6.1.3.4 Dependent variables

6.1.3.4.1 Subjective data

• SHAPE Automation Trust Index (SATI)

• Van Der Laan Technology Acceptance Scale

• Generic trust in technology questionnaire (UTAUT)


6.1.3.5 Procedure

Upon greeting, the participant will be taken to an appropriate site to read the study information sheet and an excerpt of the vehicle instruction manual describing the ADAS system they will use, and to fill out the pre-questionnaires and the informed consent form, as well as a screening of previous ADAS experience. After this the participant will be escorted to the test vehicle and receive a briefing about the ADAS feature being tested, after which the participant will proceed to drive the vehicle ‘as they see fit’ on a section of motorway. After the drive, the participant will be asked to fill out the questionnaires relating to acceptance of and trust in the system to assess any recalibration of trust between pre- and post-drive.

6.2 Test#2

Test #2 (Stage 2) will be held in Stuttgart, Germany (FHG) in Month 12; the aim is to test both sketch HMI and Driver Monitoring System (DMS) concepts and early simulations. Equipment to be used includes the vehicle interaction simulator lab (FHG), plus simple acquisition devices such as cameras or other image-based sensors for the DMS. For the evaluation of the HMI and the DMS, state-of-the-art approaches to define and measure relevant constructs such as mode awareness, system transparency, situation awareness, driver readiness to take over, trust, distraction, and vigilance will be applied. The HMI concept should be applicable to the above-defined EURO NCAP and AD testing use cases and scenarios, and should support the driver in completing the tasks and transitions in those use cases in a safe manner. Additional use cases can be defined with respect to ongoing research projects and publications. Hints for the operationalisation of testing use cases for transitions can be found in [10, 23, 25-30] or in research initiatives like the German large-scale research project PEGASUS [31], which started in 2016 and aims at developing testing procedures and scenarios for automated vehicles.

Furthermore, factors that influence the acceptance of automated driving vehicles will be constantly monitored during the development process of the HMI and DMS concepts. Usability should therefore be good overall (80% of interaction elements should be clear to end users). In [32] a model is depicted that aims at finding relevant factors which explain and predict the acceptance and purchase behaviour for ADAS. The model allows an estimation of acceptance early in the product development process of an ADAS, and the most relevant factors can be incorporated as target states during the whole development process of the HMI concepts. Besides pragmatic characteristics of ADAS like usability or perceived usefulness, psychological and emotional factors like comfort, perceived image, perceived attractiveness, individual motives (eco-friendliness, technical curiosity, fun-to-drive) and social norm, which predominantly influence the buying behaviour of ADAS, are also taken into account. Especially in early stages of the development process of innovative products, acceptance-related behaviour can hardly be measured directly, since buying and usage behaviour related to the ADAS is technically not possible. However, acceptability and attitudes toward the ADAS and perceived features of the systems can be measured and thus considered in further steps of the development process. This iterative approach has proven successful in the area of usability engineering and is also reflected in ISO 9241-210 [33].

6.2.1 Scenario

To be determined.

6.2.2 Experimental design

To be determined.

6.2.3 Hypothesis

To be determined.


6.2.4 Dependent variables

6.2.4.1 Objective measures

• Time Exposed Time to Collision

• Time Integrated Time to Collision

• Distance Headway

• Time Headway

• Mean absolute lateral position

• Standard Deviation of Lane position

• Steering reaction time

• Driver’s behaviour (gaze at relevant objects, interventions such as pressing the brake pedal)

• Reaction time of the driver
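The two TTC-based indicators above follow Minderhoud and Bovy [7]: Time Exposed Time-to-Collision (TET) accumulates the time during which the TTC stays below a safety threshold, while Time Integrated Time-to-Collision (TIT) additionally integrates how far below the threshold the TTC drops. As a minimal sketch (the 3 s threshold, the 10 Hz sampling rate and the sample trace are illustrative assumptions, not project values), these could be computed from a sampled TTC series as follows:

```python
def tet_tit(ttc_series, dt, threshold=3.0):
    """Time Exposed TTC (TET) and Time Integrated TTC (TIT), cf. [7].

    ttc_series: sampled time-to-collision values in seconds
                (None where no collision course exists)
    dt:         sampling interval in seconds
    threshold:  TTC threshold in seconds below which driving is
                considered safety-critical (illustrative value)
    """
    tet = 0.0  # total time spent with TTC below the threshold [s]
    tit = 0.0  # integrated shortfall (threshold - TTC) [s^2]
    for ttc in ttc_series:
        if ttc is not None and 0.0 < ttc < threshold:
            tet += dt
            tit += (threshold - ttc) * dt
    return tet, tit

# Hypothetical 10 Hz TTC trace with two samples below the threshold
tet, tit = tet_tit([5.0, 4.0, 2.5, 2.0, 4.5, None], dt=0.1)
```

TET only sums the duration of exposure below the threshold, while TIT also weights the severity of each exposure, so two drives with equal TET can still be ranked by how critical the encounters were.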

6.2.4.2 Subjective measures

• SHAPE Automation Trust Index (SATI)

• Van Der Laan Technology Acceptance Scale

• System Usability Scale

• NASA Raw Task Load Index (RTLX)

• Arnett Inventory of Sensation Seeking (AISS)

• The mini Driver Behavior Questionnaire

• DSI – TBA

• Generic willingness to use technology scale (trust in tech, confidence etc.)
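For the Van der Laan acceptance scale [12], each of the nine semantic-differential items is commonly scored from -2 to +2 and aggregated into a "usefulness" and a "satisfying" subscale. A minimal scoring sketch, assuming the mirrored items have already been recoded so that +2 is always the positive pole:

```python
def van_der_laan(ratings):
    """Aggregate nine Van der Laan items [12] into the two subscales.

    ratings: nine item scores on a -2..+2 scale, in questionnaire order,
             with mirrored items already recoded (assumption).
    Returns (usefulness, satisfying) as subscale means.
    """
    if len(ratings) != 9:
        raise ValueError("the Van der Laan scale has nine items")
    usefulness = [ratings[i] for i in (0, 2, 4, 6, 8)]  # items 1, 3, 5, 7, 9
    satisfying = [ratings[i] for i in (1, 3, 5, 7)]     # items 2, 4, 6, 8
    return sum(usefulness) / 5.0, sum(satisfying) / 4.0

u, s = van_der_laan([1, 2, 1, 0, 2, 1, 1, 2, 0])
```

Reporting the two subscale means separately, rather than a single total, preserves the scale's distinction between how useful and how pleasant the system is judged to be.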

Usability-related measures

• Understandability of HMI elements (e.g. pictograms)

• Preferences concerning different variations of HMI elements

• Ability to discriminate different stimuli (e.g. warning sounds)

• Task performance (appropriate behaviour)

6.3 Test#3

Test 3 will be carried out in M18 at VTI, Sweden. While test 2 focuses on the HMI and early DMS concepts, test 3 focuses on the whole system. The purpose of these tests is to verify the virtual prototypes, including the user interface developed in WP3 together with the algorithms developed in WP4, before proceeding with the development of the on-road prototypes. The test will be carried out at VTI using a full-scale moving-base driving simulator for the driver's view, while also exploring the view of other drivers (i.e. the driver in the simulator interacts with automated vehicles). For pedestrians, a pedestrian simulator incorporating a model of the city of Lund in Sweden will be used. Pedestrians can walk around freely in the modelled city, equipped with an HTC Vive head-mounted display (or a comparable state-of-the-art model available in M18 of the project), interacting with automated vehicles that respond to the pedestrians' movements and intentions.

6.3.1 Apparatus

The VTI simulator IV (Sim IV) is the fourth advanced driving simulator designed and built at the Swedish National Road and Transport Research Institute (VTI). The simulator, taken into operation in 2011, has a moving base with 8 degrees of freedom (DoF), a field of view (FoV) of 180 degrees, and a system for rapid cabin exchange [34]. According to Jansson et al. [34], the visual system consists of a forward screen and two or more LCD displays. The LCD displays are used as rear-view mirrors, and their number depends on which cabin is used. The forward screen (see Figure 3) uses front projection; currently nine projectors, each with a resolution of 1280x800 pixels, project the image onto a curved screen whose diameter varies between 1.8 m (to the left) and 3.1 m (to the right) and whose height is 2.5 m. The field of view is approximately 180×50 degrees. The motion system can generate displacements in all three translational DoF as well as the three rotational DoF; the entire moving base is therefore said to have 8 DoF, indicating that the actuator space has 8 DoF [34].

Figure 3 VTI Driving Simulator IV to be used at Test#3.

6.3.2 Part 1: Driving simulation

This part addresses automated driving using a moving-base driving simulator. The test will focus on use cases 3, 4 and 5 and will be carried out in a full-scale moving-base driving simulator at VTI. Twenty-four drivers, equally distributed between men and women, will be included in the experiment. They will be interviewed about their experience and understanding of the system, and their stress, workload, trust in automation and fatigue will be measured using both rating scales and physiological measures. Two simulator-based exercises are planned for this stage, targeting drivers and other road users.

6.3.2.1 Scenario

The scenarios for the VTI experiments are contingent on previous experiments conducted in WP5 and are thus yet to be determined.

6.3.2.2 Hypothesis

The hypotheses are contingent on previous experiments in WP5 and are thus yet to be determined.

6.3.2.3 Dependent variables

Additional dependent variables may be added based on previous experiments in WP5, particularly related to

the pedestrian simulation.

6.3.2.3.1 Objective measures

• Time Exposed Time to Collision

• Time Integrated Time to Collision

• Distance Headway

• Time Headway

• Mean absolute lateral position

• Standard Deviation of Lane position

• Steering reaction time
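Mean absolute lateral position and the standard deviation of lane position (SDLP) can be derived directly from the logged lateral-offset signal (cf. the operational definitions in SAE J2944 [2]). A minimal sketch, assuming a uniformly sampled, signed offset from the lane centre in metres:

```python
import math

def lane_keeping_metrics(lateral_positions):
    """Mean absolute lateral position and SDLP from sampled lane offsets.

    lateral_positions: signed offsets from the lane centre [m], one per
                       sample, uniformly spaced in time (assumption).
    """
    n = len(lateral_positions)
    mean_abs = sum(abs(y) for y in lateral_positions) / n
    mean = sum(lateral_positions) / n
    # population standard deviation of the lateral position signal
    sdlp = math.sqrt(sum((y - mean) ** 2 for y in lateral_positions) / n)
    return mean_abs, sdlp

mean_abs, sdlp = lane_keeping_metrics([0.1, -0.1, 0.2, -0.2])
```

In practice SDLP is usually computed per road segment after excluding intentional lane changes; the sketch leaves such preprocessing to the analysis pipeline.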

6.3.2.3.2 Subjective measures

• SHAPE Automation Trust Index (SATI)

• Van Der Laan Technology Acceptance Scale


• System Usability Scale

• NASA Raw Task Load Index (RTLX)

• Arnett Inventory of Sensation Seeking (AISS)

• The mini Driver Behavior Questionnaire

• DSI – TBA

• Generic willingness to use technology scale (trust in tech, confidence etc.)

6.3.3 Part 2: Driving simulation with automated vehicles in surrounding

Automated driving algorithms will be implemented to run on the surrounding traffic objects in the simulator. This enables the study of driver acceptance and behaviour when experiencing 'mixed traffic', in which a combination of automatically and manually driven vehicles co-exists. Some previous research has indicated that drivers cannot identify whether a vehicle is driven by a set of algorithms or by a human; if these results prove consistent, there are substantial implications for driving safety.

6.3.4 Part 3: Pedestrian simulation

This part addresses pedestrian interaction with automated vehicles using an HTC Vive (or similar) head-mounted display. The test will focus on use cases 1 and 2. Fifty pedestrians will be included in the simulator study, and they will be interviewed about their experience of the interaction with the automated vehicle and their understanding of the vehicle and its intentions. The experiment will be carried out by VTI; the user interface will come from WP3, and input on vehicle behaviour in interaction with pedestrians will come from WP4 partners. A web version of the experiment will also be prepared in order to gather input from stakeholders. The aim is to collect input from at least 50 stakeholders from 10 different countries.

6.4 Test#4

Test 4, in month 24, focuses primarily on testing the algorithms developed in WP4 as well as evaluating the test methodology, while test 5, in month 30, focuses on the complete system including HMI and DMS. Pedestrian dummies currently used as targets in ADAS tests and Euro NCAP ratings do not have an articulated head; an articulated head would make it possible, during the tests of the demonstrators, to interact with the pedestrian and to better predict its intentions and behaviour. UTAC will therefore propose a new pedestrian target, and a new test that uses a pedestrian target with different head movements and evaluates the car's capability to perceive, understand, predict and manage these movements. In T5.4 there will be two testing iterations (tests 4 and 5), each lasting three weeks, where test 4 focuses on the algorithms and the test methodology while test 5 focuses on the full system (algorithms and HMI). Tests 4 and 5 comprise 20 tests for the 10 scenarios on the Linas-Montlhéry test track in France. The results from T5.4 will serve as input to WP3, WP4 and to regulation and Euro NCAP groups.

Test #4 (Stage 3) will be held in Linas-Montlhéry, France (UTAC), in month 24. The aim is to test the automated car on a test track; the equipment to be used includes remotely controlled vehicles and VRU targets (UTAC), training tracks (UTAC), and the BRAVE-equipped automated car (UAH).

6.4.1.1 Experimental design

The experimental design for test #4 is contingent on the progress in WP3 and WP4, as well as on the results from earlier experiments in WP5; it has therefore yet to be determined and will be added when sufficient information is available.


6.4.1.2 Apparatus

We plan to use complex test equipment for repeatable, measurable and safe tests and evaluations: driving robots, synchronisation and D-GPS positioning, data analysis and post-processing tools, and safe, deformable pedestrian and car targets with propulsion and remote-control systems, shown in the following photos:

Figure 4 Equipment proposed to be used during Test#4.

6.5 Test#5

Test #5 (Stage 3) will be held in Linas-Montlhéry, France (UTAC), in month 30. The aim is to test the automated car on a test track; the equipment to be used includes remotely controlled vehicles and VRU targets (UTAC), training tracks (UTAC), and the BRAVE-equipped automated car (UAH).

The whole design for test #5 is contingent on the progress in WP3 and WP4, as well as on the results from earlier experiments in test #4; it has therefore yet to be determined and will be added when sufficient information is available.

6.6 Test#6

Test #6 (Stage 3) will be held in Barcelona, Spain (ACASA), in month 36. It will present the final demonstration in an operational environment; the equipment to be used includes the Catalonia Living Lab (ACASA) and the BRAVE-equipped automated car (UAH).

Description

In the final test of BRAVE (test 6), the full concept will be demonstrated and tested at the final demonstration in Barcelona (at the Catalonia Living Lab). An important part of test 6 is to coordinate the final demonstration in the so-called "Catalonia Living Lab", including the management of the permits required to drive autonomous cars on open roads. For the final event, users and stakeholders will be invited to test the vehicles on real roads. To ensure a good representation of users and stakeholders, the aim is to coordinate this event with another large event such as the Barcelona International Motor Show.

The experimental design, apparatus, scenario, hypotheses and variables to be measured depend on the findings and conclusions obtained during previous tests and are yet to be decided.


7 Conclusions

This deliverable details the experimental procedures proposed for the BRAVE project, covering the early test-track benchmarking tests as well as the research and development activities that form part of the V-ISO model for testing in simulators, on test tracks and, finally, in public traffic. It should be noted that this document will be updated continuously to reflect any changes in the experimental plans based on the outcomes of the planned research activities; this is crucial for the efficiency and value of utilising the V-ISO model for research and development activities.


References

[1] S. Bischoff and F. Diederichs, "User centered HMI development applying the V-ISO model for

collaborative car-2-x driver assistance systems," presented at the Stuttgarter Symposium für

Produktentwicklung (SSP) Stuttgart, 2015.

[2] SAE J2944, "Surface Vehicle Recommended Practice: Operational Definitions of Driving

Performance Measures and Statistics," 2015.

[3] A. R. A. Van der Horst, A time based analysis of road user behaviour in normal and critical

encounters (no. HS-041 255). 1990.

[4] A. Kassner, "Meet the driver needs by matching assist," in Proceedings of the European Conference on Human Centred Design for Intelligent Transport Systems, 2008, p. 327.

[5] B. Farber, "Designing a distance warning system from the user point of view," APSIS report, Glonn-

Haslach: Institute fur Arbeitspsychologie and Interdisziplinare Systemforchung, 1991.

[6] J. Hogema and W. Janssen, "Effects of intelligent cruise control on driving behaviour: a simulator

study," 1996.

[7] M. M. Minderhoud and P. H. Bovy, "Extended time-to-collision measures for road traffic safety

assessment," Accident Analysis & Prevention, vol. 33, no. 1, pp. 89-97, 2001.

[8] A. Kircher, M. Uddman, and J. Sandin, Vehicle control and drowsiness. Statens väg-och

transportforskningsinstitut, 2002.

[9] N. Mullen, M. Bédard, J. A. Riendeau, and T. Rosenthal, "Simulated lane departure warning system

reduces the width of lane that drivers use," Advances in Transportation Studies, 2010.

[10] A. Eriksson and N. A. Stanton, "Takeover Time in Highly Automated Vehicles: Noncritical

Transitions to and From Manual Control," Human Factors, vol. 59, no. 4, pp. 689-705, Jun 2017.

[11] D. M. Dehn, "Assessing the impact of automation on the air traffic controller: the SHAPE

questionnaires," Air traffic control quarterly, vol. 16, no. 4, p. 127, 2008.

[12] J. D. Van Der Laan, A. Heino, and D. De Waard, "A simple procedure for the assessment of

acceptance of advanced transport telematics," Transportation Research Part C: Emerging

Technologies, vol. 5, no. 1, pp. 1-10, 1997.

[13] J. Brooke, "SUS-A quick and dirty usability scale," Usability evaluation in industry, vol. 189, no.

194, pp. 4-7, 1996.

[14] J. C. Byers, A. Bittner, and S. Hill, "Traditional and raw task load index (TLX) correlations: Are

paired comparisons necessary," Advances in industrial ergonomics and safety I, pp. 481-485, 1989.

[15] S. G. Hart and L. E. Staveland, "Development of NASA-TLX (Task Load Index): Results of

Empirical and Theoretical Research," Advances in psychology, vol. 52, pp. 139-183, 1988.

[16] J. Arnett, "Sensation seeking: A new conceptualization and a new scale," Personality and individual

differences, vol. 16, no. 2, pp. 289-296, 1994.

[17] J. Reason, A. Manstead, S. Stradling, J. Baxter, and K. Campbell, "Errors and violations on the

roads: a real distinction?," Ergonomics, vol. 33, no. 10-11, pp. 1315-32, Oct-Nov 1990.

[18] R. Rowe, G. D. Roman, F. P. McKenna, E. Barker, and D. Poulter, "Measuring errors and violations

on the road: a bifactor modeling approach to the Driver Behavior Questionnaire," Accid Anal Prev,

vol. 74, no. Supplement C, pp. 118-25, Jan 2015.

[19] K. Spolander, "Drivers' assessment of their own driving ability," 0347-6030, 1983.

[20] P. H. van Westendorp, "NSS Price Sensitivity Meter (PSM)–A new approach to study consumer

perception of prices," in Proceedings of the ESOMAR Congress, 1976, pp. 139-167.

[21] B. Kupiec and B. Revell, "Measuring consumer quality judgements," British Food Journal, vol. 103,

no. 1, pp. 7-22, 2001.

[22] American Psychological Association, "Ethical principles of psychologists and code of conduct,"

American psychologist, vol. 57, no. 12, pp. 1060-1073, 2002.

[23] V. A. Banks, A. Eriksson, J. O'Donoghue, and N. A. Stanton, "Is partially automated driving a bad

idea? Observations from an on-road study," Applied Ergonomics, vol. 68, pp. 138-145, 4// 2018.

[24] J. C. De Winter, "Using the Student's t-test with extremely small sample sizes," Practical

Assessment, Research & Evaluation, 2013.

[25] I. Petermann-Stock, L. Hackenberg, T. Muhr, and C. Mergl, "Wie lange braucht der Fahrer? Eine

Analyse zu Übernahmezeiten aus verschiedenen Nebentätigkeiten während einer


hochautomatisierten Staufahrt," 6. Tagung Fahrerassistenzsysteme. Der Weg zum automatischen

Fahren, 2013.

[26] N. Schömig, V. Hargutt, A. Neukum, I. Petermann-Stock, and I. Othersen, "The interaction between

highly automated driving and the development of drowsiness," Procedia Manufacturing, vol. 3, pp.

6652-6659, 2015.

[27] V. Melcher, S. Rauh, F. Diederichs, H. Widlroither, and W. Bauer, "Take-over requests for

automated driving," Procedia Manufacturing, vol. 3, pp. 2867-2873, 2015.

[28] F. Diederichs, S. Bischoff, H. Widlroither, P. Reilhac, K. Hottelart, and J. Moizard, "Smartphone

integration and SAE level 3 car automation–a new cockpit concept and its evaluation in a car

simulator," in Proceedings of the 8th VDI conference Der Fahrer im, 2015, vol. 21.

[29] P. Bazilinskyy, A. Eriksson, B. Petermeijer, and J. de Winter, "Usefulness and satisfaction of take-over requests for highly automated driving," presented at Road Safety and Simulation, The Hague, 2017.

[30] A. Eriksson, V. A. Banks, and N. A. Stanton, "Transition to manual: Comparing simulator with on-

road control transitions," Accident Analysis & Prevention, vol. 102, pp. 227-234, May 2017.

[31] PEGASUS Project, "PEGASUS project,"

http://www.bmwi.de/Redaktion/DE/Pressemitteilungen/2016/20160119-bmwi-startet-

forschungsprojekt-testverfahren-hochautomatisierte-fahrzeuge.html, 2017.

[32] S. Arndt, "Gegenstand der Arbeit: Fahrerassistenzsysteme," in Evaluierung der Akzeptanz von

Fahrerassistenzsystemen: Springer, 2011, pp. 21-32.

[33] ISO, "ISO 9241-210:2010. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems," International Organization for Standardization (ISO), Switzerland, 2010.

[34] J. Jansson, J. Sandin, B. Augusto, M. Fischer, B. Blissing, and L. Källgren, "Design and

performance of the VTI Sim IV," in Driving Simulation Conference, 2014.