
Wright State University

CORE Scholar

Browse all Theses and Dissertations Theses and Dissertations

2017

Assessing Effectiveness of Information Presentation Using Wearable Augmented Display Device for Emergency Response

Sriram Raju Chandran, Wright State University

Follow this and additional works at: https://corescholar.libraries.wright.edu/etd_all

Part of the Operations Research, Systems Engineering and Industrial Engineering Commons

Repository Citation
Chandran, Sriram Raju, "Assessing Effectiveness of Information Presentation Using Wearable Augmented Display Device for Emergency Response" (2017). Browse all Theses and Dissertations. 1724. https://corescholar.libraries.wright.edu/etd_all/1724

This Thesis is brought to you for free and open access by the Theses and Dissertations at CORE Scholar. It has been accepted for inclusion in Browse all Theses and Dissertations by an authorized administrator of CORE Scholar. For more information, please contact [email protected].


ASSESSING EFFECTIVENESS OF INFORMATION PRESENTATION USING

WEARABLE AUGMENTED DISPLAY DEVICE FOR EMERGENCY RESPONSE

A thesis submitted in partial fulfillment of the

requirements for the degree of

Master of Science in Industrial and Human Factors Engineering

By

SRIRAM RAJU CHANDRAN

B.E., Anna University, 2011

2017

Wright State University


WRIGHT STATE UNIVERSITY

GRADUATE SCHOOL

April 28, 2017

I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY

SUPERVISION BY Sriram Raju Chandran ENTITLED Assessing Effectiveness of

Information Presentation Using Wearable Augmented Display Device for Emergency

Response BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF Master of Science in Industrial and Human Factors Engineering.

Subhashini Ganapathy, Ph.D.

Thesis Director

Jaime E. Ramirez-Vick, Ph.D.

Chair, Department of Biomedical,
Industrial and Human Factors Engineering

Committee on
Final Examination

Subhashini Ganapathy, Ph.D.

Mary C. McCarthy, M.D., F.A.C.S.

Mary E. Fendley, Ph.D.

Robert E. W. Fyffe, Ph.D.

Vice President for Research and

Dean of the Graduate School


ABSTRACT

Chandran, Sriram Raju. M.S.I.H.E., Department of Biomedical, Industrial and Human

Factors Engineering, Wright State University, 2017. Assessing Effectiveness of

Information Presentation Using Wearable Augmented Display Device for Emergency

Response.

Small screen wearable devices are becoming ubiquitous in the medical field, especially in the fields of surgery and trauma care. A technological intervention that supports data transfer, such as sending a summary of the patient's vitals throughout the transfer of care, would be of great benefit to the trauma care department. This research focuses on information presentation for a wearable augmented reality device to improve human decision making during transfer of care for emergency response, to improve user experience, and to reduce cognitive workload. The Google Glass™ device acts as a heads-up display for users, in this case medical responders in hospital trauma care. The small form factor of the display poses a challenge in presenting information while ensuring that the user is not cognitively overloaded. Such a system could help medical responders in the trauma care center prepare treatment materials such as medicine, arrange diagnostic procedures, bring in specialized doctors, consult experienced doctors, and call in support staff as required. The results of this experiment can make a significant contribution to design guidelines for information presentation on small form factors, especially in time-critical decision-making scenarios.


TABLE OF CONTENTS

1. INTRODUCTION ....................................................................................................... 1

1.1 Trauma Care.................................................................................................. 2

1.2 Augmented Reality ....................................................................................... 8

1.2.1 Military ....................................................................................................... 13

1.2.2 Education .................................................................................................... 14

1.2.3 Healthcare ................................................................................................... 15

1.3 Information Presentation ............................................................................. 18

1.4 Visual Search .............................................................................................. 20

1.5 Electroencephalography .............................................................................. 22

2. RESEARCH OBJECTIVES ...................................................................................... 25

3. METHODOLOGY .................................................................................................... 28

3.1 Design of Experiment ................................................................................. 28

3.1.1 Patient Vitals Simulation ............................................................................ 29

3.1.2 Visual Search Task ..................................................................................... 30

3.2 System Description ..................................................................................... 30

3.2.1 Patient Vitals Simulation ............................................................................ 31

3.2.2 Visual Search Task ..................................................................................... 34

3.3 Testing Procedures ...................................................................................... 35

3.3.1 Patient Vitals Simulation ............................................................................ 35

3.3.2 Visual Search Task ..................................................................................... 37

3.4 Dependent Measures and Analysis .............................................................. 39

3.4.1 Patient Vitals Simulations ........................................................................... 39

3.4.2 Visual Search Task ..................................................................................... 41

4. RESULTS .................................................................................................................. 42

5. CONCLUSION ......................................................................................................... 51


6. IMPLICATIONS ....................................................................................................... 53

7. FUTURE WORK ...................................................................................................... 54

8. REFERENCES .......................................................................................................... 55

Appendix I - Questionnaire ............................................................................................... 68

Appendix II – NASA TLX Mental Workload Rankings .................................................. 72

Appendix III - Google Glass™ Patient information streaming – Scenarios and Triage

indices ............................................................................................................................... 74

Appendix IV – Triage Indices ........................................................................................................ 76


LIST OF FIGURES

Figure 1: Patient Vitals system model ................................................................................ 6

Figure 2: (a)Pokemon Go, the Augmented Reality game where character appears on

screen depending on the geospatial location (left); (b)Nearest wiki, shows name of the

building and distance (right). .............................................................................................. 9

Figure 3: (a)Monocular HMD - Elbit DASH (left), Atac, R., 2012; (b) Binocular HMD -

Kaiser Electro-Optics SIM EYE (right) Bloom, M. B., Salzberg, A. D., & Krummel, T.

M., 2002 ............................................................................................................................ 11

Figure 4: Aurasma app showing the work of students who augmented information about

the bridge .......................................................................................................................... 15

Figure 5: CT scan results superimposed on ankle ............................................................ 16

Figure 6: Emotiv Epoc ...................................................................................................... 23

Figure 7: Participant taking ATLS test during the patient vitals application simulation .. 29

Figure 8: Participant performing visual search task ......................................................... 30

Figure 9: Application Navigation of Patient vitals simulation task .................................. 33

Figure 10: Participant performing visual search task ....................................................... 34

Figure 11: Participant performing visual search task ....................................................... 34

Figure 12: Screen layout of user interface 1 ..................................................................... 36

Figure 13: Screen layout of user interface 2 ..................................................................... 36

Figure 14: Screen layout of user interface 3 ..................................................................... 37

Figure 15: Emotiv Epoc terminals .................................................................................... 38


Figure 16: Types of screen layout for the visual search task with varying size, color, and

target location: polychromatic small (Top left), polychromatic large (Top right),

monochromatic large (Bottom left), monochromatic small (Bottom right) ..................... 38

Figure 17: Average response time with respect to experience ......................................... 43

Figure 18: Response time for different UI elements ......................................................... 44

Figure 19: Average electric data in micro volts against each EEG channel ..................... 45

Figure 20: Heat map showing activity in the temporal and superior parietal cortex ........ 46

Figure 21: Heat map showing activity in the temporal, superior parietal and pre frontal

cortex region ..................................................................................................................... 46

Figure 22: NASA TLX score across experience............................................................... 47

Figure 23: User response for preference of UI elements .................................................. 47

Figure 24: SUS scores for the 3 UIs ................................................................................. 48


LIST OF TABLES

Table 1: Hypothesis Related to Research Questions ........................................................ 25

Table 2: User response for device ..................................................................................... 49

Table 3: User response for Patient Vitals application ....................................................... 49

Table 4: User response for User Interface ........................................................................ 50


1. INTRODUCTION

Small screen devices are foreseen as ubiquitous in the medical field, especially in the fields of surgery and trauma care (Glauser, W., 2013). In this era, where the Internet of Things (IoT) is believed to be the future, augmented reality allows interaction between the digital and real world. It can deliver rich and meaningful digital overlays on the real world. The abilities of this technology are well recognized, and research is being conducted in different domains such as education, medicine, aviation, and so on (Schmidt, G. W., & Osborn, D. B., 1995; Casey, C. J., 1999; Szalavári, Z., Eckstein, E., & Gervautz, M., 1998; Casey, C. J., & Melzer, J. E., 1991; Foote, B. D., 1998). The purpose of this study was to analyze the effects of information complexity and mental workload on trauma care providers/surgeons during emergency response scenarios for an augmented display device. Using an augmented display for medical responders in hospital trauma care can optimize the communication channel and information flow. Wearable devices like Google Glass™, being of small form factor, pose challenges in presenting information while making sure that there is no cognitive overload for the user. Other challenges in small form factor devices include low information density, which can affect readability, and the need for optimal navigation to access the different features of the applications. This chapter focuses on the different approaches taken by trauma care departments to enhance communication, different applications of augmented display devices, the effect of information presentation on decision making, and research done in the area of visual search.


1.1 Trauma Care

A trauma care center, also known as a "casualty department" or "accident & emergency center," specializes in services to care for victims of major trauma. Trauma surgeons treat patient injuries that include falls, motor vehicle crashes, motorcycle crashes, assaults, gunshot wounds, stab wounds, burns, and so on. The quality of health care is determined by the abilities and attitudes of the personnel. Trauma centers are classified based on the resources they can provide (Level I, II, III, IV, and V). The highest levels of trauma centers (Level I and II) have access to specialist medical and nursing care, including emergency medicine, trauma surgery, critical care, neurosurgery, orthopedic surgery, anesthesiology, and radiology, as well as highly sophisticated surgical and diagnostic equipment. Lower levels of trauma centers (Level IV and V) may only be able to provide initial care and stabilization of a traumatic injury and arrange for transfer of the victim to a higher level of trauma care (Reference Guide of Suggested Classification, 2016). Patients in the emergency department (ED) are usually admitted in an unconscious state and require urgent medical intervention to survive. The leading causes of trauma are motor vehicle collisions, falls, and assaults with a deadly weapon. According to the Centers for Disease Control and Prevention (CDC), in 2014 unintentional injuries (accidents) were the leading cause of death for American children and adults ages 1–44.

An effective emergency medical system should not only provide emergency care, but also integrate all the interdependent components from the initial emergency call, ambulance dispatch, ambulance transportation, and pre-hospital ambulance care through arrival at the healthcare facility (El-Masri, S., & Saddik, B., 2012). Previous studies have shown emergency response communication systems being integrated with telecommunication systems, the geographic location of the deployed ambulance, and hospital availability to address factors such as delayed ambulance dispatch, incorrect pre-hospital treatments, incomplete and inaccurate clinical handover, emergency department overcrowding, and ambulance diversion, all of which can cause delays and affect the outcomes of care (Andersson, T., & Värbrand, P., 2007; Greenko, J., Mostashari, F., Fine, A., & Layton, M., 2003; Lee, S., 2011; Repede, J. F., & Bernardo, J. J., 1994; Slovis, C. M., Carruth, T. B., Seitz, W. J., Thomas, C. M., & Elsea, W. R., 1985).

Research by Guise et al. (2015) states that EMS relies on the personnel's knowledge of clinical assessment and decision making, so it is important to include a trauma expert in the patient transfer process. Effective communication between staff in healthcare is therefore important, and especially critical in emergency situations. To address this, organizations and governments around the world have adopted improved communication techniques and also used modern technology to assist them. Researchers from Sweden (Andersson, T., & Värbrand, P., 2007) and Louisville, Kentucky, USA (Repede, J. F., & Bernardo, J. J., 1994) have proposed decision support systems for simulating ambulance dispatch and communication methods to find the best locations to place ambulances in a given area and thereby increase the preparedness of the trauma care personnel. Andersson, T., & Värbrand, P. (2007) have also presented decision support tools that describe dynamic ambulance relocation and automatic ambulance dispatching. The New York City Department of Health has used New York City Emergency Medical Services (EMS) ambulance dispatch data on call types, age, symptoms, and other clinical data to examine potential biases associated with ambulance dispatch-based surveillance (Greenko, J., Mostashari, F., Fine, A., & Layton, M., 2003). These data helped them improve preparedness; the research group examined data for a period when there was a community-wide rise in influenza-like illness (ILI). Slovis, C. M., Carruth, T. B., Seitz, W. J., Thomas, C. M., & Elsea, W. R. (1985) developed a decision tree priority dispatch system that uses preplanned response modes to screen and rank incoming requests for emergency medical services (EMS); it was implemented in Atlanta and Fulton County, Georgia. The dispatch system shortened the average response time from 14.2 minutes to 10.4 minutes for the 30% of patients deemed most urgent.

Once a trauma case is reported, the information reaches the emergency team in the hospital, and then air or ground transfer is assigned as requested by the victim or the person who reports from the scene. The first responder on scene decides the urgency of care required. Present emergency response protocols involve information being sent by first responders to the most appropriate trauma center nearby. Patient evaluation is done by assessing the scenario, the severity of injury, the first responder's knowledge repository, and emergency protocol (Shen, S., & Shaw, M., 2004). Quality and timely organization of treatment during transfer of care is achieved by prioritizing patients based on the severity of their injuries. The subsequent updates the trauma center receives occur only when the patient reaches the emergency department. The first responders give a brief summary of what they observed on the ground, such as: vital signs that they manually noted, pictures taken, any changes in vitals during transport, any signs of pain in the patient's body, any kind of care given during transport, the type of incident reported by witnesses, and the duration of transport (Bost, N., Crilly, J., Patterson, E., & Chaboyer, W., 2012). The present system is chaotic, and there is a need to reduce response time (Carr, B. G., Caplan, J. M., Pryor, J. P., & Branas, C. C., 2006). This would also require the transport vehicle (air or ground) to be equipped with an appropriate sensor network and a medium to transfer data smoothly to the trauma care center. Communication between hospital and ambulance is critical. A study suggests that air transfer is significantly faster than ground transfer for distances greater than 50 miles, but for distances less than 50 miles there is no significant difference (Diaz, M. A., Hendey, G. W., & Bivins, H. G., 2005). Studies report different response times, such as on-scene arrival, on-scene time, and total response time (Frykberg, E. R., & Tepas, J. J., 3rd., 1988; Carr, B. G., Caplan, J. M., Pryor, J. P., & Branas, C. C., 2006). Research by Ek, B., & Svedlund, M. (2015) shows that involving an expert such as a registered nurse in the ambulance dispatch process led to better use of equipment and medical protocols and increased patient safety. Research by Hedges, J. R., Feero, S., Moore, B., Shultz, B., & Haver, D. W. (1988) looked at different variables obtained from the ER department. Variables obtained included age, sex, mechanism of injury, EMS response time intervals, emergency department (ED) and inpatient disposition, revised trauma scores (RTS), injury severity scale (ISS) scores, and outcome (survived to leave hospital; died). They stated that the least possible time should be spent in the out-of-hospital setting, allowing only for the performance of essential procedures such as immobilization and any requisite intubation and intravenous access. They also state that the arguments in the "load and go" versus "stay and stabilize" debate have largely been based on common sense with limited supportive data.

Mobile health monitoring systems have been used extensively for triage purposes, and previous work on integrating technology into emergency care systems has proved to be helpful. Wac et al. (2004) developed the MobiHealth System, which highlights the different pros and cons of wireless network transmission of patients' vitals data; the system can support sensors and is connected through a body area network. Fischer, M., Lim, Y. Y., Lawrence, E., & Ganguli, L. K. (2008) developed ReMoteCare, a remote healthcare monitoring system that uses a Wireless Sensor Network (WSN), pulse oximeters, environmental sensors, and streaming video to monitor patients. Montgomery et al. (2004) developed a body-worn sensor network called Lifeguard, a personal physiological monitor for extreme environments such as patient transfer, military service, and space.

This study utilizes the idea of wireless transmission of sensor data from the patient to the emergency practitioner. Figure 1 shows a schematic diagram of the system, in which the emergency practitioner is able to view patient data that can potentially allow them to make faster decisions on patient treatment.

Figure 1: Patient Vitals system model
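The system model in Figure 1 can be thought of as a producer-consumer pipeline: sensors on the patient publish vital readings, and the trauma center display consumes and renders the latest values. The sketch below is only an illustration of that flow, not the system built for this study; the vital-sign fields, update interval, and in-memory queue are assumptions for demonstration.

```python
import queue
import random
import time
from dataclasses import dataclass


@dataclass
class VitalsReading:
    """One snapshot of patient vitals (field names are illustrative)."""
    timestamp: float
    heart_rate: int        # beats per minute
    spo2: int              # blood oxygen saturation, percent
    systolic_bp: int       # mmHg


def sensor_producer(channel: queue.Queue, n_updates: int = 5) -> None:
    """Simulate the ambulance-side sensors pushing readings onto the channel."""
    for _ in range(n_updates):
        channel.put(VitalsReading(
            timestamp=time.time(),
            heart_rate=random.randint(60, 140),
            spo2=random.randint(88, 100),
            systolic_bp=random.randint(90, 160),
        ))
        time.sleep(0.1)  # stand-in for the streaming update interval


def display_consumer(channel: queue.Queue) -> None:
    """Simulate the trauma-center display rendering each received reading."""
    while not channel.empty():
        r = channel.get()
        print(f"HR {r.heart_rate} bpm | SpO2 {r.spo2}% | BP {r.systolic_bp} mmHg")


if __name__ == "__main__":
    stream = queue.Queue()
    sensor_producer(stream)
    display_consumer(stream)
```

In the actual system the in-memory queue would be replaced by a wireless transport between the transport vehicle and the trauma care center; the point of the sketch is simply the decoupling of data capture from data presentation.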

Appendix III shows the different scenarios used in the experiment for the trauma care personnel's evaluation. The scenarios were created based on real-time trauma case observations and also by consulting SMEs. In one of the scenarios presented, an individual working in an oil factory is involved in an explosion and sustains 80% third-degree burns (Appendix III, Scenario 1) with an extremely high heart rate. In a conventional ER system, the first responders transfer the patient to the designated trauma care center and summarize their observations upon arrival. By applying the proposed system, as in the system model shown in Figure 1, the trauma care personnel in the trauma care center can wirelessly receive the patient's vital information during the transfer and, by communicating with the first responders if required, verify that proper care is being given. In this case the trauma care personnel would check whether proper fluids are being administered and the required medications have been given. This system could decrease the time taken by the first responders to summarize the patient details at the trauma care center.

Research activities in emergency settings have demonstrated substantial benefits for improving patient care and management. However, there are several obstacles to conducting proper research in such an environment, even though implementation of research strategies in emergency and trauma settings is key to informing injury prevention strategy. Major limiting factors include the dynamic nature of the environment, the need for immediate action, and the family's emotional state, and so the physicians involved are often over-burdened (El-Menyar, A., Asim, M., Latifi, R., & Al-Thani, H., 2015). Taking the above reasons into account, this research was conducted using scenarios that were simulated on Google Glass™. Augmented reality is seen as a technology of the future and has been studied in various areas such as the military, healthcare, and education (Azuma et al., 2001).


1.2 Augmented Reality

Augmented Reality (AR) allows us to overlay computer graphics onto the real world. An AR interface allows users to see the real world at the same time as virtual imagery projected by the AR device (Navab, N., Traub, J., Sielhorst, T., Feuerstein, M., & Bichlmeier, C., 2007). In an AR interface, the user views the world through a handheld or head-mounted display (HMD) that is either see-through or overlays graphics on video of the surrounding environment. Conventional display devices draw the user's attention onto the screen, whereas AR interfaces enhance the real-world experience. HMDs are information-viewing devices that can provide information in a way that no other display can. The display can use head and body movements to augment information on the real world, replicating the way we view, navigate through, and explore the world.

Some applications of head-mounted displays are medical visualization as an aid in surgical procedures (Schmidt, G. W., & Osborn, D. B., 1995), military vehicles for viewing sensor imagery (Casey, C. J., 1999), gaming (Szalavári, Z., Eckstein, E., & Gervautz, M., 1998), aircraft simulation and training (Casey, C. J., & Melzer, J. E., 1991), and avionics display applications (Foote, B. D., 1998). Google Glass™ is a good example of a head-mounted augmented display device and is the device used in this study.

Some recent examples of AR applications are the Pokemon Go game and Wikitude. Pokemon Go is an iOS- and Android-based mobile application released in 2016, as shown in Figure 2(a); users hunt for cartoon characters that appear randomly depending on the geospatial location. Figure 2(b) shows a mobile application called Nearest Wiki, which augments building names and their distance from the user when the camera is pointed at them.


Figure 2: (a)Pokemon Go, the Augmented Reality game where character appears on screen depending on the geospatial location (left); (b)Nearest wiki, shows name of the building and distance (right).

Spitzer, C. R., & Spitzer, C. (2000) have identified the following issues in designing HMDs:

• Size and weight — the size and weight of the image source are the most important considerations. Secondly, the designers must determine whether a supplemental illumination source is required. With these in mind, the designers can determine the placement of these source components, which further impacts the device's ease of use.

• Power — image-source CRTs and AMELs require a high-voltage drive, whereas LCDs have low transmission and require a brighter backlight for adequate luminance.

• Resolution — resolution (pixel density) depends on the type of information to be displayed. Designers need to consider whether the image generator or sensor video is compatible with this resolution.

• Addressability — devices such as LCDs, AMELs, and OLEDs are considered finite addressable displays because the images are pixelated, whereas CRTs are considered infinitely addressable because they can accommodate high-density information.

• Aspect ratio — this is an important consideration when choosing an image source because it determines the field of view of the display.

• Color — the first HMDs produced were monochrome displays, and colored displays were introduced later. Color coding information helps users segregate the types of information. Recent HMDs have the capacity to display a vast range of colors.

HMDs can be classified as follows:

Monocular — a single channel viewed by a single eye. These are usually light, inexpensive, and simple compared to the other forms. Because of these advantages, most current HMD systems produced are monocular. Some examples of monocular HMDs are the Elbit DASH, the Vision Systems International JHMCS as shown in Figure 3(a) (Atac, R., 2012), and the Google Glass™. The drawbacks associated with these devices are:

1. laterally asymmetric center of gravity,
2. focus,
3. eye dominance,
4. binocular rivalry, and
5. ocular-motor instability.

Biocular — a single video channel viewed by both eyes. The advantage here is that it eliminates ocular-motor instability, is more comfortable, and can show more information than a monocular design. As it is a two-eyed viewing system, a stringent set of alignment, focus, and adjustment requirements constrains the designer.


Figure 3: (a)Monocular HMD - Elbit DASH (left), Atac, R., 2012; (b) Binocular HMD - Kaiser Electro-Optics SIM EYE (right) Bloom, M. B., Salzberg, A. D., & Krummel, T. M., 2002

Binocular — each eye views an independent video channel. This is the most complex, most expensive, and heaviest of all three options. The drawbacks of binocular HMDs are the same as those of biocular HMDs, but the key advantage of a two-eyed system is that it provides partial binocular overlap (to enlarge the horizontal field of view). Examples are the Kaiser Electronics HIDSS and the Kaiser Electro-Optics SIM EYE as shown in Figure 3(b) (Bloom, M. B., Salzberg, A. D., & Krummel, T. M., 2002).

The potential medical dangers of head-mounted displays have also been documented (Patterson, R., Winterbottom, M. D., & Pierce, B. J., 2006) and include decreased awareness of physical surroundings, visual interference, binocular rivalry with latent misalignment of the eyes, and headaches. The authors performed intense tasks on the device and noted that the surface temperature rises by 90% within 10 minutes of usage. Some medical applications being explored include remote mentoring, viewing lab reports without looking away from patients, and live streaming surgeries to medical students (Kaufmann, C., Rhee, P., & Burris, D., 1999). Privacy regulations and a reluctance among some hospital administrations to accept new technologies hinder the use of Google Glass™ in real-time environments (Glauser, W., 2013).

With all the different applications made possible by HMDs, we can see research moving in different directions. With advancements in sensor technology and significant decreases in size and weight, HMDs are increasingly being designed with AR. By overcoming challenges such as response time delay and AR integration failures, designers foresee AR as the technology of the future, with significant results (Mekni, M., & Lemieux, A., 2014). Recent studies have integrated Bluetooth technology with mobile and other wearable devices and near-field sensor technology with AR devices.

Google Glass™ was developed by Alphabet (formerly Google). The device looks like a pair of eyeglasses (prescription and novelty eyeglasses can be attached if required) consisting of a tiny computer and camera built into the frame. Users look up at a holographic screen that appears to be floating in front of them. It displays information in a smartphone-like but hands-free format that is operated through voice commands, head tilts, and a touchpad on the side. It can take pictures, make video calls, receive notifications, and enable hands-free web searching (McNaney et al., 2014). On the other hand, a few issues have been raised in the user community (Nayak, K., Kotak, D., & Narula, H., 2014). Consumers' concerns include the track pad, social interactions, privacy, and anonymous recording. The device is also easily breakable, and its face recognition technology can easily be misused, which may be offensive to the person being recognized. As the user needs to look up to focus on the screen, it cannot be used in tasks that demand high cognition, such as driving. The following sections give an overview of different domains where AR has been applied.


1.2.1 Military

A pilot's primary task is vigilance of the environment (awareness of coordinates, destination, tasks at hand, horizon) while simultaneously acquiring and processing the visual cues obtained from the display panel. Visual cues may include the orientation of the aircraft, altitude, speed, horizon, temperature, and pressure. The pilot's tasks can be modeled as depending on these generic sources of information. Above all, the highest priority for the pilot is to maintain stability while navigating and prevent the aircraft from stalling. This level of cognition was partially enhanced by the use of HMDs, which provide and alert pilots with important information. The U.S. military introduced HMDs into fixed-wing aircraft in the early 1970s for targeting air-to-air missiles (Melzer, J. E., 2000). During the late 1970s, F-4 Phantom fighter jets carried Visual Targeting Acquisition Systems (VTAS). This shows how the demand for HMDs grew in the military domain. HMDs were sometimes referred to as Helmet-Mounted Sights (HMS) when used for target-locating tasks. One of the early sensor technologies to be integrated with HMDs was Forward-Looking Infrared (FLIR), which creates shades-of-grey imagery of objects from slight differences in black-body thermal emissions. During the 2000s, research on cockpit HMDs intensified, and technologies like the cockpit display of traffic information (CDTI) became popular. CDTI was specifically designed to enhance pilots' awareness of nearby traffic (Wickens, C. D., Hellenberg, J., & Xu, X., 2002), and the data link communications system was designed to provide digitally uplinked communications from air traffic control to the pilot (Navarro, C., & Sikorski, S., 1999).


1.2.2 Education

AR technology helps students understand the world better; therefore, this technology is very valuable for the education domain. It helps students with learning difficulties by getting them to engage with and perceive information in ways that were not possible before. AR devices enable interaction with the real world through images and computer-based input elements, providing a digital platform to manipulate real objects. The form factor of the device used plays an important role, and there are some compelling examples of education software for handheld devices.

One application created was Trails of Integrity and Ethics (Chow, E. H., Thadani, D. R., Wong, E. Y., & Pegrum, M., 2015), in which students walk around a trail and discover tasks and scenarios to be solved, eventually helping them learn the ethical outcomes. The researchers observed that this approach was more beneficial than online tutorials and conventional classroom sessions with examinations.

Aurasma created an interactive design tool (Bower, M., Howe, C., McCredie, N., Robinson, A., & Grover, D., 2014) for creating overlays on the mobile phone platform. To test the potential of AR in schools, Macquarie ICT Innovations conducted a workshop with high school students from years 8-10 who were asked to design overlays in a local park. They used objects like trees, grass, and sculptures to design the overlays. Figure 4 shows one team's attempt with a bridge; they came up with information such as its history, a small video about the bridge, and the materials that were used to build it.


Figure 4: Aurasma app showing the work of students who augmented information about the bridge

The researchers were able to learn the potential of AR in the school environment. They also saw the possibility of different applications such as 3D interactive tours and uses in zoos and museums. The above examples of AR applications in education lead to the conclusion that the visual and interactive experience that students get from this technology is pivotal. In education, learners are able to view things in ways that were never possible in reality, such as cross-sectional views of objects in heavy-duty machinery, constellations in the night sky, and so on.

1.2.3 Healthcare

Wearable technology has drawn a lot of attention in the healthcare community, and with the low-cost commercial smart glasses recently released in the market, demand for applications with advanced sensor technology in the medical field has gone up significantly. Augmented reality has been used in the surgical setting since 1986, when Roberts et al. described the first integration of a surgical microscope with stereotactic technology to superimpose a computed tomography (CT)-derived tumor contour onto the surgical field. Navab et al. (2007) superimposed CT scan results on the real body using AR, as shown in Figure 5. HMDs were initially introduced into healthcare to deal with electronic health records in a better way (Muensterer et al., 2014).

Figure 5: CT scan results superimposed on ankle

Later, HMDs were used for broadcasting surgeries to facilitate remote evaluation or to teach students from the surgeon's perspective. Most significantly, smart glasses can present data on the lenses and record images or videos through a front-facing camera. These devices are web-connected, wearable computers, sometimes in the shape of conventional glasses, which overcome the issue of manual input because they are hands-free and can be controlled by voice commands. Examples of smart glasses include Google Glass™ (Google Inc., Mountain View, CA), Moverio BT-200 (Epson Inc., Suwa, Nagano, Japan), and Meta-Pro Spaceglasses (Meta Inc., San Francisco, CA), of which Google Glass™ has received the most exposure in healthcare since its Explorer edition release in 2013.


Companies are currently developing new software platforms, specifically for smart glasses, that allow seamless recording for patient note transcription and video conferencing for consults or second opinions. The literature on HMDs includes reports documenting their use in a variety of healthcare settings. Surgeons have been surprised by the different applications made possible by this technology. One of the breakthrough applications was the overlaying of sensor data onto the real world: displaying real-time patient vitals data and computer-generated images on the screen, resulting in a composite, augmented reality view. Other examples include vital signs monitoring (Yilmaz, T., Foster, R., & Hao, Y., 2010) and telemonitoring (Hashimoto, D. A., Phitayakorn, R., Fernandez-del Castillo, C., & Meireles, O., 2016).

Dr. Anil Shah performed a rhinoplasty with the help of Google Glass™. Among the physicians who were the first to use Google Glass™ in the clinical environment, most reported that the device was comfortable while practicing or operating (Engelen, L., 2013). Such devices have also been used to improve patient monitoring for chronically ill patients (Pantelopoulos, A., & Bourbakis, N. G., 2010) and dietary management for diabetes patients (Wall, D., Ray, W., Pathak, R. D., & Lin, S. M., 2014).

Advances in technology have allowed devices to become more powerful; they are now able to record more data and transmit it at a faster rate. With further understanding of personal data, such as routines, appointments, and image analysis, devices are able to both sense and react to their environment. Context Surgery (Portland, OR) has been developing a "Surgical Dashboard" software system, which is able to analyze temporal and spatial elements, such as the clinician's calendar and low-energy Bluetooth proximity sensors (Mitrasinovic et al., 2015). In addition, this technology is used to overlay stereotactic data, such as tumor contours, and provides the surgeon with essential information without requiring them to look away from the operative field. This input of data allows Google Glass™ to understand what information surgeons are likely to need at any given moment. For example, entering the operating room can bring the patient's notes onto the display in anticipation of the surgical time-out and checklists (Mitrasinovic et al., 2015). Currently, the Dashboard developed by Context Surgery (2014) is able to display electronic health records, patient vital signs, test results, imaging, and scans. The surgeon was able to pre-operatively prepare a simulated image of the patient with the projected completed operation, which was displayed on the prism display to overlay the patient, resulting in augmented reality. This allows the dashboard to be used both in the operating room and for patient consults. Thousands of such trackers are already being used in operating rooms, deployed by Medtronic, Siemens, BrainLAB, and other manufacturers of computer-assisted surgery technology.

1.3 Information Presentation

In light of the behavioral findings in this literature, the way information is presented may have a significant impact on the decision-making process as well as on the ultimate option selected by the decision-maker. Indeed, Caplin, A., & Dean, M. (2011) find that changes in the choice environment affect reservation values and the order of search. Besedeš, T., Deck, C., Sarangi, S., & Shor, M. (2015) find that even when the choice set is kept constant, a tournament-style choice architecture in which choices are presented sequentially improves the option selected. Sonntag, A. (2015) finds that delayed display of information affects the search process but not decision quality. Industry and academia have shown immense interest in user experience in recent times, as practitioners have become well aware of the limitations of the traditional usability framework, which focuses primarily on user cognition and user performance attributes in human-technology interactions (Forlizzi, J., & Battarbee, K., 2004). Their research also highlighted that models of emotion and experience were used to help the research team think about how a person's relationship with a product might change over time. Users need to attain fluency with the product early on to ensure that they will continue to use it and not abandon it in frustration. This means that minimal time should be invested in learning the basic controls. Over time, the product should enable cognitive experiences as users begin to learn about habits and make the necessary changes in behavior. Perhaps these experiences are associated with positive, longer-term emotional responses, as the user begins to foster a long-term relationship with the product.

In spite of the growing adoption of wearable devices, there is a lack of research on user interface design solutions that enable successful multi-tasking without information overload. Information can be classified into several categories, such as text information, picture information, and sound information. Beyond differences in physical dimensions, resolution, (color) contrast, and luminance, small screen devices differ in display size and the number of menu items displayed at a time on the screen (Ziefle, M., 2010). Some devices show many functions at once, while others display one menu item per screen. In the case of text information, past research highlights the importance of information being presented in the right place at the right time (Abhyankar et al., 2013; Ganapathy, S., Anderson, G. J., & Kozintsev, I. V., 2011). Information presentation has been used to study decision making in complex tasks (Speier, C., 2006) and in the mobile phone form factor (Ganapathy, S., Anderson, G. J., & Kozintsev, I. V., 2011). There is a relationship between information presentation format and decision making that is moderated by the complexity of the task. A technological intervention that supports data transfer, in this case sending vitals from the patient throughout the transfer of care, would be of great benefit to the trauma care department. This would help the medical responder in the trauma care center prepare the necessary treatment materials such as medicine, arrange diagnostic procedures, bring in specialized physicians, obtain consults from experienced physicians, and call in support staff if required.

1.4 Visual Search

A typical visual search task involves observers presented with a display containing a number of items, and the observers are asked to find a target among different distractors. The number of items (set size) varies from trial to trial. Experimenters measure the reaction time (RT), the amount of time required to make a "target-present" or "target-absent" response. Information presented on wearable augmented reality devices involves activities such as browsing, text messaging, route navigation, reading, and gaming, all of which involve visual search that helps the user find the information they require.
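In this paradigm, RT is typically summarized as a function of set size, and the slope of that function (milliseconds per additional item) is taken as an index of search efficiency. The snippet below is a minimal sketch of that analysis; the set sizes and RT values are made-up placeholders, not data from this study.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for target-present trials
# at each display set size; replace with measured data.
set_sizes = np.array([4, 8, 16, 32])
mean_rt_ms = np.array([520.0, 560.0, 655.0, 830.0])

# Least-squares fit: RT = intercept + slope * set_size
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)

print(f"Search slope: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
# Shallow slopes (roughly 0-10 ms/item) suggest efficient, parallel search;
# steeper slopes suggest serial, attention-demanding search.
```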

Hasegawa, S., Miyao, M., Matsunuma, S., Fujikake, K., & Omori, M. (2008) found by subjective evaluation that larger character sizes (comparing 1 mm, 2 mm, and 2.5 mm in height) resulted in an increase in legibility on a computer screen, and there was no significant difference in search speed for the different character sizes. Schaik, P. V., & Ling, J. (2001) studied the effects of background contrast on visual search performance in web pages and mobile devices, and they did not find any significant difference in performance. To measure mobile users' information processing abilities while walking, a conventional serial visual search paradigm was used, in which participants were instructed to search for a target ("T" shape) among distractors ("L" shapes) in different rotated orientations. The presence versus absence of an irrelevant color singleton distractor in a visual search task was not only associated with activity in the superior parietal cortex, in line with attentional capture, but was also associated with frontal cortex activity (Eglin, M., Robertson, L. C., & Knight, R. T., 1991). The presence of task-relevant features in a given location led to a change in ERP/ERMF activity beginning 140 msec after stimulus onset, with neural activity in the ventral occipito-temporal cortex; this effect was independent of the location of the actual target (Hopf et al., 2004). Research by Beck, D. M., Muggleton, N., Walsh, V., & Lavie, N. (2006) shows that the parietal cortex is associated with change detection. Parietal activity and the left lateral precentral gyrus of the frontal cortex have also been implicated: activation of frontal and parietal cortical areas, mostly in the right hemisphere, is associated with sustained attention performance (Sarter, M., Givens, B., & Bruno, J. P., 2001). In the SEEV model, visual scanning is guided by four factors: Salience, Effort, Expectancy, and Value. Salience and effort are the main bottom-up factors of visual attention (Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H., 2011); several salience factors have been identified, such as color, shape, or motion (Itti & Koch, 2001). Effort corresponds to the visual angle between different pieces of information; if this distance is too large, it can inhibit the intake of information (Kvalseth, 1977; Sheridan, 197...).


1.5 Electroencephalography

Electroencephalography (EEG) is a method by which the electrical activity of the brain is measured. Electrodes are placed on the scalp, and electrical activity is typically recorded using non-invasive electrodes. EEG is used for diagnosing brain abnormalities such as epilepsy, sleep disorders, coma, and brain death. EEG and the related study of ERPs (event-related potentials) are used extensively in neuroscience, cognitive psychology, neurolinguistics, and psychophysiological research (Teplan, M., 2002). The brain processes signals for movement, emotions, behavior, analysis, and consciousness. When neurons in the brain are triggered, local current flows (weak electrical signals) are produced, which the EEG device measures and amplifies for interpretation (Teplan, M., 2002). The difference in electrical potential is caused by postsynaptic graded potentials from pyramidal cells that create electrical dipoles between the soma (body of the neuron) and the apical dendrites (neural branches) (Teplan, M., 2002). Brain waves are sinusoidal and range from 0.5 to 100 µV in amplitude. They have been categorized into four basic groups: beta (>13 Hz), alpha (8-13 Hz), theta (4-8 Hz), and delta (0.5-4 Hz) (Teplan, M., 2002). Beta rhythm has been shown to increase with attention and vigilance in general (Steriade, M., 2005). Further, gamma has been found to be involved in a host of other cognitive processes: attention, arousal, object recognition, and top-down modulation of sensory processes (Engel, A. K., Fries, P., & Singer, W., 2001).
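Band activity such as that listed above is commonly estimated from the EEG power spectral density. The sketch below shows one conventional way to do this with Welch's method on a single channel; the signal here is synthetic and the 128 Hz sampling rate is only an assumed value for illustration, so no recorded data from this study is involved.

```python
import numpy as np
from scipy.signal import welch

fs = 128  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic one-channel signal: 10 Hz (alpha) and 20 Hz (beta) components plus noise.
eeg = (40e-6 * np.sin(2 * np.pi * 10 * t)
       + 15e-6 * np.sin(2 * np.pi * 20 * t)
       + 5e-6 * np.random.randn(t.size))

# Power spectral density via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df  # approximate integral of the PSD over the band
    print(f"{name:>5}: {power:.3e} V^2")
```

The same band-power summary can be computed per electrode and per experimental condition to compare, for example, activity during low- and high-workload portions of a task.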


Figure 6: Emotiv Epoc

As mentioned in Section 1.1, the critical time constraints on a trauma care provider require the equipment used in the current study to be easily learnable and quick to set up. For the measurement of brain signals in this experiment, the Emotiv EPOC® was used, as shown in Figure 6. Unlike conventional laboratory-grade EEG devices, the Emotiv EPOC® is wireless, has a flexible design, and is easy to fit, making it convenient for participants to wear without compromising setup time. Using this device, researchers can detect facial movements, emotional states, and imagined motor movements. A number of researchers have used Emotiv EPOC® EEG recordings for the assessment of cognitive processes. Researchers have investigated different EEG processing algorithms for the classification of shapes being thought about (Esfahani, E. T., & Sundararajan, V., 2012), the classification of positive and negative emotions elicited by pictures (Pham, T. D., & Tran, D., 2012), and the evaluation of cognitive workload (Anderson et al., 2011). The psychophysiological signal is continuously available, whereas behavioral or self-report data may be detached from the user experience (Allanson, J., & Fairclough, S. H., 2004).

However, there is limited research on how to present complex and dynamic information at a glance to aid human decision-making in time-critical scenarios such as trauma and surgical care. This research attempts to assess the effectiveness of information presentation using a wearable augmented display device in an emergency response environment.


2. RESEARCH OBJECTIVES

The primary goal of this research is to a) explore how wearable augmented reality

devices, such as Google Glass™, can improve human decision making during transfer of

care and b) understand the design of information presentation on the wearable augmented

reality device to improve user experience, reduce cognitive workload and aid decision

making. Therefore, a series of research questions were developed for this study. The

following table (Table 1) lists the research questions and associated hypotheses.

Table 1: Hypothesis Related to Research Questions

Research Question 1: Do User Interface elements such as object size, color, and target location influence visual search in augmented wearable display devices?

Hypothesis: Smaller object size decreases visual search performance on the augmented wearable display device.
- Ho: Visual search performance and response time for small size are not significantly different from large size.
- H1: Large object size has better visual search performance and faster response times than small object size.

Hypothesis: Monochromatic search is significantly different than polychromatic search.
- Ho: Visual search performance and response time for monochromatic search are not significantly different from those of polychromatic search.
- H1: Visual search performance and response time for monochromatic search are significantly different from those of polychromatic search.

Hypothesis: Targets in the inner and outer area of the display have significantly different effects on the search performance.
- Ho: Visual search performance and response time for targets present in the inner area are not significantly different from those of the outer area.
- H1: Visual search performance and response time for targets present in the inner area are significantly different from those of the outer area.

Hypothesis: Target locations on the right and left half of the screen have significantly different effects on the search performance.
- Ho: Visual search performance and response time for targets present in the right half of the screen are not significantly different from those of the left half of the screen.
- H1: Visual search performance and response time for targets present in the right half of the screen are significantly different from those of the left half of the screen.

Research Question 2: Does the screen layout affect the response time?

Hypothesis: The number of UI elements on the screen has significant effects on the user experience.
- Ho: The response time and performance for the three screen layouts (UI1, UI2, and UI3) are not significantly different from each other.
- H1: The response time and performance of at least one of the three screen layouts are significantly different.

Research Question 3: Does the frequency of data visualization in a small screen device affect the response time?

Hypothesis: The frequency of data visualization has a significant effect on the user experience.
- Ho: The response time and performance for a frequency of 2 seconds are not significantly different from those for a frequency of 6 seconds.
- H1: The response time and performance for a frequency of 2 seconds are significantly different from those for a frequency of 6 seconds.

Research Question 4: Is there a significant difference in response time between novice and expert trauma surgeons?

Hypothesis: Experts have faster response times than novices.
- Ho: Response time and performance for experts are not significantly different from those of novices.
- H1: Response time is faster and performance is better for experts compared to novices.


The next section provides an overview of the research approach that was used to

investigate the research questions and provide empirical data to support the hypotheses.

3. METHODOLOGY

3.1 Design of Experiment

An empirical study was conducted to determine the effect of the presentation of patient vitals on a wearable augmented display for improved transfer of care during emergency response. To obtain the most reliable consensus, input from an emergency department clinical expert in trauma and critical care was obtained. The user interface and interaction design followed an iterative design process involving the Subject Matter Expert (SME). Each iteration consisted of a design evaluation and questions centered on the research questions; these questions ensured that the expert's reasoning remained relevant and did not deviate from the primary vision of this study. A user study was also conducted with the trauma care expert to observe their tasks (and the intensity of those tasks) and their use of technology at work. The experiment was designed to be tested on Google Glass™ as the wearable device. The pool of participants (6 male and 6 female) included physicians and residents from the Department of Trauma and Surgery, Boonshoft School of Medicine, Miami Valley Hospital, Wright State University, Dayton. Six residents (three junior and three senior) participated as novices and six physicians as experts. The experiment was divided into two parts: a patient vitals simulation task and a visual search task.


3.1.1 Patient Vitals Simulation

The patient vitals simulation included testing participants on multitasking while viewing streaming patient vitals data and making decisions. The participants were asked to take an ATLS (Advanced Trauma Life Support) test as the secondary task. Figure 7 shows the experiment setup as the participant viewed the stimuli. EEG data were collected through the Emotiv EPOC® device to understand the brain response to the visual search task as the stimuli were presented.

Figure 7: Participant taking ATLS test during the patient vitals application simulation


3.1.2 Visual Search Task

The visual search task tested participants on information presentation, addressing the research question of how information presentation on the wearable augmented reality device can be designed to improve user experience, reduce cognitive workload, and aid decision making. Figure 8 shows the participant taking the ATLS test and monitoring incoming patient data. The participants were tested for multitasking in this experiment.

Figure 8: Participant performing visual search task

3.2 System Description

For this study, a wireless head-mounted augmented reality device, Google Glass™, was used. The device is a wearable mobile computing device with Bluetooth connectivity to internet-ready devices. Google Glass™ has an optical head-mounted display resembling eyeglasses; it displays information in a smartphone-like manner but provides a hands-free format that is controlled via voice commands and touch. The Google Glass™ display of 640 x 360 pixels rests above the line of sight so that the user's vision is not interrupted. The device comes with 16 GB of storage and 1 GB of RAM. Applications for the device are developed on Android version 4.4. The device includes the following features: real-time hands-free notifications, hands-free visual and audio instructions, instant connectivity, instant photography/videography, and augmented reality. The wearable camera is also a helpful educational tool when put on a patient or mannequin because it allows residents to view their bedside manner from the patient's perspective. Internet connectivity helps in real-time patient monitoring.

3.2.1 Patient Vitals Simulation

The patient vitals system was designed based on the Android Google Glass™ design guidelines. The user interface design elements in the patient vitals simulation were developed after assessing and prioritizing the triage information by observing various patient monitoring tools in the trauma care department. The triage information was verified and evaluated by experts, and the software was developed using Android Studio targeting Android API level 19 (Android 4.4). The system design for the mobile interface was based on a flat navigation hierarchy with three levels of display, as shown in Figure 9. The navigation design includes three different levels of information presentation.

a) Home screen: The home screen is the first screen users see when launching the application.


b) Menu page: The menu page is the second level of the app and presents the various applications on the device. Users select either the Patient Vitals app or the Visual Search task app depending on the experiment.

c) App page: Detail pages are the third level of the application. In the case of the patient vitals application, the details of each patient are presented. If any component has active criticality, the color of the component tile changes to yellow or red, with yellow indicating low criticality and red indicating high criticality (a minimal sketch of such a criticality-to-color mapping is shown below).

The visual search task included presenting the first slide, with the user navigating to the next screen by swiping or tapping.
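As referenced in item c) above, the following is a minimal sketch, not the thesis application code, of how a vital-sign value could be mapped to a tile color. The thresholds are taken from the triage indices in Appendix IV; the function name, the "white" label for normal values, and the interpretation of the systolic blood pressure normal band as 100-120 are illustrative assumptions.

```python
def tile_color(vital: str, value: float) -> str:
    """Return 'white' (normal), 'yellow' (low criticality), or 'red' (high criticality)."""
    # (normal range, intermediate ranges) per Appendix IV; anything outside both is extreme.
    ranges = {
        "heart_rate":       ((50, 100),  [(40, 50), (100, 140)]),
        "temperature_f":    ((97, 100),  [(95, 97), (100, 102)]),
        "respiratory_rate": ((12, 16),   [(8, 12), (16, 20)]),
        "spo2":             ((90, 100),  [(80, 90)]),
        "systolic_bp":      ((100, 120), [(120, 160)]),   # assumed normal band
    }
    normal, intermediate = ranges[vital]
    if normal[0] <= value <= normal[1]:
        return "white"
    if any(lo <= value <= hi for lo, hi in intermediate):
        return "yellow"
    return "red"

print(tile_color("heart_rate", 150))  # -> 'red' (extreme per Appendix IV)
```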


Figure 9: Application Navigation of Patient vitals simulation task


3.2.2 Visual Search Task

For the visual search task, the screen was partitioned into a 4 x 6 matrix and targets/distractors were placed in these positions (P1 through P24). In Figure 10, the darker shade represents the inner area and the lighter shade represents the outer area of the screen. The participant had to find the target "T" (presented in one of two horizontal orientations) among the distractors "L" (presented in four possible orientations).

P1 P2 P3 P4 P5 P6

P7 P8 P9 P10 P11 P12

P13 P14 P15 P16 P17 P18

P19 P20 P21 P22 P23 P24

Figure 10: 4 x 6 grid of target/distractor positions (P1 through P24); the darker shade marks the inner area and the lighter shade the outer area of the screen

Figure 11: Participant performing visual search task
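To make the stimulus design described above concrete, the following is a minimal sketch (not the actual experiment software) that generates one trial for the 4 x 6 grid: one "T" target in one of two horizontal orientations and 23 "L" distractors in one of four orientations. The position labels follow the P1-P24 numbering; the orientation names are illustrative.

```python
import random

POSITIONS = [f"P{i}" for i in range(1, 25)]          # 4 x 6 grid, P1..P24
TARGET_ORIENTATIONS = ["T_left", "T_right"]          # top of the T faces left or right
DISTRACTOR_ORIENTATIONS = ["L_up", "L_right", "L_down", "L_left"]

def generate_trial(rng: random.Random = random) -> dict:
    """Return a mapping of position -> stimulus for one visual search slide."""
    target_pos = rng.choice(POSITIONS)
    trial = {pos: rng.choice(DISTRACTOR_ORIENTATIONS) for pos in POSITIONS}
    trial[target_pos] = rng.choice(TARGET_ORIENTATIONS)   # exactly one target, 23 distractors
    return trial

trial = generate_trial()
print([pos for pos, stim in trial.items() if stim.startswith("T_")])  # target position
```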


3.3 Testing Procedures

The expert was consulted about the scenarios. Each participant was asked whether they had used Google Glass™ before and about their familiarity with smartphones. Participants were introduced to the Google Glass™ device and were trained on the different gestures that could be used to operate it and to navigate between and within the applications. The training modules were untimed sessions, and participants were encouraged to practice until they were familiar with the system. Familiarity was based on a subjective assessment of the participant's level of comfort in interacting with the interface and on the successful completion of a scenario similar to the testing scenarios.

3.3.1 Patient Vitals Simulation

The patient vitals simulation was a repeated measures design with two within-subjects independent variables: type of user interface (UI1 vs. UI2 vs. UI3) and frequency of data visualization (2 seconds vs. 6 seconds). The experiment was counterbalanced using a Latin square with respect to the order of the scenarios being tested and the type of system. Twelve different scenarios were tested to collect the appropriate metrics across the three UIs and the two data visualization frequencies. All scenarios involved monitoring the vital signs and user responses; each scenario was presented with a summary for 8 seconds and patient vitals for 30 seconds. The scenarios were developed from observing emergency scenarios at Miami Valley Hospital, Dayton, and were evaluated by subject matter experts.
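As an illustration of the counterbalancing approach (a minimal sketch, not the ordering actually used in the study), a cyclic Latin square can rotate the order of the six UI-by-frequency conditions across participants so that every condition appears once in each serial position:

```python
from itertools import product

# Six within-subject conditions: 3 UIs x 2 presentation frequencies (seconds).
CONDITIONS = [f"{ui}-{freq}s" for ui, freq in product(["UI1", "UI2", "UI3"], [2, 6])]

def cyclic_latin_square(conditions):
    """Each row is the condition order for one participant; every condition
    appears once per row and once per serial position across rows."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

for participant, order in enumerate(cyclic_latin_square(CONDITIONS), start=1):
    print(participant, order)
```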


UI 1 consists of the patient ID at the top of the screen; below this, the screen is divided into two halves, with the left half containing the summary and the right half containing the three most important vital signs for the physician's evaluation.

Figure 12: Screen layout of user interface 1

UI 2 consists of the patient ID at the top of the screen followed by a summary; below this, the four most important vital signs (heart rate, BP, temperature, and RR) are presented in a 2 x 2 matrix.

Figure 13: Screen layout of user interface 2

UI 3 consists of the patient ID at the top of the screen; below this, the five most important vital signs (heart rate, BP, SpO2, temperature, and RR), along with age, are presented in a 3 x 2 matrix.


Figure 14: Screen layout of user interface 3

3.3.2 Visual Search Task

The visual search task was a repeated measures design with four within-subjects independent variables: target and distractor color (monochromatic vs. polychromatic), font size (large vs. small), position of the target (right vs. left half of the screen), and area in which the target was present (inner vs. outer area). The Google Glass™ screen displayed the target "T" shape in either of two orientations; the top of the "T" faced either right or left. There were multiple "L" shapes as distractors in four different orientations; the top of the "L" shapes faced up, right, down, or left. Every slide had one target and 23 distractors in a 4 by 6 grid. Figure 16 shows the four different types of screens presented to the participants: polychromatic with small font, polychromatic with large font, monochromatic with large font, and monochromatic with small font. The Emotiv EPOC®, a consumer-grade wireless EEG device, has 14 channels (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4) and two reference channels (P3, P4 locations). It uses wet saline-based sensors with 14-bit resolution, has a sampling rate of 128 Hz, and uses a sequential single-ADC sampling method. The Emotiv EPOC® has a bandwidth of 0.2-43 Hz. The Emotiv SDK was used to view the signal in real time and to check the quality of the sensor-scalp contact.

Figure 15: Emotiv Epoc terminals

Figure 16: Types of screen layout for the visual search task with varying size, color, and target location: polychromatic small (Top left), polychromatic large (Top right), monochromatic large (Bottom left), monochromatic

small (Bottom right)


3.4 Dependent Measures and Analysis

In order to evaluate the performance of the system, several measures such as

cognitive workload, ease of use, and performance times were collected. Prior to

conducting the experiment, the users were interviewed on their familiarity with mobile

phones and wearable technology.

3.4.1 Patient Vitals Simulation

NASA-TLX was used to measure the cognitive workload of the participants when performing a task; it is an aggregate of six subscales: mental demand, physical demand, temporal demand, performance, effort, and frustration. Among self-report mental workload measurement techniques, the NASA-TLX method is the most commonly used (Hart, S. G., 2006). In this method, each subscale takes a value between 0 and 100, and at the end of the measurement the participants answer 15 questions that compare these subscales pairwise. A weighted calculation is then used to extract the total workload; the subscale weights give NASA-TLX its multidimensional view of workload. The structure, target components, and timing of tasks are the main components affecting mental workload (Hart, S. G., 2006). A user's mental workload is also affected by numerous external factors such as environmental conditions, user ability, system and operator errors, and behavior patterns, which means it does not yield a constant value even for the same tasks and users, and mental workload ratings change (Hart, S. G., 2006). The NASA-TLX weighted rating procedure for a task can be completed in less than 2 minutes, which shows that, as a multidimensional rating method, NASA-TLX can be used efficiently with heavily occupied subjects, as in this experiment. The NASA-TLX questionnaire used in this study consists of two question groups. The first part contains the six mental workload scales, the definitions of these scales, and the work expected from the experts within the scope of each task. In the second part, the experts are asked to make binary comparisons of these six scales, from which the weight of each scale is calculated.
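As an illustration of the weighted calculation described above (a minimal sketch, not the scoring tool used in the study), the overall workload can be computed from the six 0-100 ratings and the 15 pairwise comparisons as follows; the example ratings and comparison outcomes are made up:

```python
SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx_weighted(ratings: dict, pairwise_winners: list) -> float:
    """ratings: scale -> 0..100; pairwise_winners: the scale chosen in each of the
    15 pairwise comparisons. A scale's weight is the number of times it was chosen
    (0..5); the weights sum to 15, so the weighted score stays on the 0..100 range."""
    weights = {s: pairwise_winners.count(s) for s in SCALES}
    assert sum(weights.values()) == 15, "expected 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 30}
winners = ["mental"] * 5 + ["temporal"] * 4 + ["effort"] * 3 + \
          ["performance"] * 2 + ["frustration"] * 1
print(round(nasa_tlx_weighted(ratings, winners), 2))  # -> 59.67
```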

Ease of use was measured using the System Usability Scale (SUS) score (Brooke, 1996). SUS provides a quick, reliable tool to measure usability and learnability. The SUS instrument, developed by Brooke (1996), provides a single reference score for participants' view of the product's usability. It consists of 10 statements that are scored on a 5-point scale based on the users' agreement, and the final score ranges from 0 to 100, with higher scores indicating better perceived usability. Brooke (1996) cautioned that "scores for individual items are not meaningful on their own." Prior work suggests that SUS scores above 90 can be considered superior, scores below 70 warrant further scrutiny and improvement, and scores below 50 call for serious improvement. Users must respond carefully, as the statements alternate between positive and negative wording.
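For reference, the standard SUS scoring procedure is shown below as a minimal sketch (not the study's scoring sheet): responses are on a 1-5 scale, odd-numbered items are positively worded and even-numbered items negatively worded, and the summed contributions are scaled to 0-100.

```python
def sus_score(responses: list) -> float:
    """responses: ten ratings, 1..5, in questionnaire order.
    Odd-numbered items contribute (rating - 1), even-numbered items (5 - rating);
    the sum of contributions (0..40) is multiplied by 2.5 to give a 0..100 score."""
    assert len(responses) == 10
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based
                     for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
```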

The SUS was followed by a general questionnaire about the performance of the device, the application, and the user interface. The general questionnaire was used to evaluate the user interface design elements, and a stop clock was used to measure the time taken by the participant to find the target in a particular slide. The response time for the visual search task was calculated as the time difference between two consecutive taps/slides, and for the patient vitals simulation it was the time taken by the participant to react to the scenario presented to them. The general questionnaire was divided into two categories: perceived usefulness of Google Glass™ in the trauma care environment and user experience.
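Since the visual search response time is simply the interval between consecutive tap timestamps, the computation can be sketched as follows (illustrative variable names, not the study's logging code):

```python
def response_times(tap_timestamps_s: list) -> list:
    """Tap timestamps (seconds) recorded at each slide advance; the response time
    for slide i is the interval between tap i and tap i+1."""
    return [t2 - t1 for t1, t2 in zip(tap_timestamps_s, tap_timestamps_s[1:])]

print(response_times([0.0, 2.0, 4.5, 6.5]))  # -> [2.0, 2.5, 2.0]
```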

ATLS test responses were collected to evaluate multitasking ability while using the augmented wearable device in the patient vitals simulation experiment. ATLS is a training program, developed by the American College of Surgeons, for medical providers in the management of acute trauma cases, and it is common knowledge among the pool of participants in the current study.

3.4.2 Visual Search Task

The users' perceived responses were compared with the brain signals obtained from the EEG. EEGLAB was used for signal processing and EEG data analysis. EEGLAB is an open-source interactive MATLAB toolbox that was used to process the signals obtained from the Emotiv EPOC®; in this experiment it was used for artifact rejection, standard averaging of channel waveforms, and power spectra. Artifact rejection was done in two steps. First, the channel waveforms were decomposed using Independent Component Analysis (ICA), which rejects large artifacts; EEGLAB performs the ICA according to the EEG device used, taking the number of channels and the frequency range into consideration. Artifacts include jaw clenching, eye movement, electrode disconnection, and potentials related to cardiac activity. Second, the data were scanned visually for clearly 'bad' data epochs, which were rejected before the data were used for further interpretation.
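The study performed this cleaning in EEGLAB (MATLAB). For illustration only, an equivalent two-step pipeline (ICA-based artifact removal followed by epoch rejection, then averaging and a power spectrum) can be sketched with MNE-Python; the file name, component exclusions, epoch length, and rejection threshold below are assumptions, not values from the study.

```python
import mne
from mne.preprocessing import ICA

# Assumed EDF export of the 14-channel Emotiv EPOC recording (hypothetical file name).
raw = mne.io.read_raw_edf("participant01.edf", preload=True)
raw.filter(l_freq=0.2, h_freq=43.0)          # match the device bandwidth of 0.2-43 Hz

# Step 1: ICA decomposition; components judged to be eye/jaw/cardiac artifacts
# would be added to ica.exclude after visual inspection.
ica = ICA(random_state=42)
ica.fit(raw)
ica.exclude = []                             # indices chosen by the analyst
clean = ica.apply(raw.copy())

# Step 2: cut fixed-length epochs and drop clearly bad ones by amplitude.
epochs = mne.make_fixed_length_epochs(clean, duration=2.0, preload=True)
epochs.drop_bad(reject=dict(eeg=200e-6))     # assumed 200 µV peak-to-peak threshold

# Averaged waveform and power spectrum, analogous to the EEGLAB outputs described.
evoked = epochs.average()
psd = epochs.compute_psd(fmin=1.0, fmax=40.0)
```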

To determine differences in the dependent measures (response time and performance) among the experimental conditions and participant groups, one-way analysis of variance (ANOVA, α = 0.05) was conducted using JMP 13 statistical software (SAS Institute Inc., Cary, NC, USA).
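The study ran its ANOVAs in JMP 13. For illustration only, an equivalent one-way ANOVA on response times by experience group can be run in Python; the group values below are made up and are not study data.

```python
from scipy import stats

# Hypothetical response times (seconds) for three experience groups.
doctors          = [11.8, 12.4, 13.0, 11.5]
senior_residents = [12.2, 12.9, 11.7, 12.5]
junior_residents = [16.1, 17.3, 15.8, 16.9]

f_stat, p_value = stats.f_oneway(doctors, senior_residents, junior_residents)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # reject H0 at alpha = 0.05 if p < 0.05
```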

4. RESULTS

The following analyses were conducted to identify the effects of information complexity and mental workload on trauma care providers/surgeons during emergency response scenarios using Google Glass™. The two experiments, the visual search task and the patient vitals simulation, were conducted independently. In the visual search task, user interface elements such as object size, color, and target location were tested for their influence on visual search; EEG data were also used to identify the brain areas active during target search. In the patient vitals simulation, the participants were presented with different UI screens and their experience was evaluated; this experiment was also used to examine the effect of the scenarios on response time. The UI screens were presented at different frequencies, which provides insight into the efficiency of the data presentation. The participants were divided into two groups, experienced doctors as experts and residents as novices, and these groups were compared on response time and on their experience with the augmented reality device.

Results indicate that there was a significant difference in response time between doctors and residents (F(5, 141), p < 0.001, ηp² = 0.031). There was no significant difference in response time for the different user interfaces, and there was no interaction effect. The mean response time and standard deviation were 12.027 sec and 3.406 sec for doctors and 14.43 sec and 4.949 sec for residents, as seen in Figure 17. The mean response times with respect to the user interface were 13.6 sec for UI1, 13.31 sec for UI2, and 12.77 sec for UI3, with standard deviations of 4.59 sec, 4.34 sec, and 4.31 sec respectively. When residents were further analyzed based on their experience, the response time was significantly different for junior residents when compared to senior residents and doctors (F(2, 141), p < 0.001, ηp² = 0.211). The mean response time and standard deviation were 12.027 sec and 3.406 sec for doctors, 16.722 sec and 4.79 sec for junior residents, and 12.139 sec and 3.994 sec for senior residents.
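For reference, the effect size reported throughout this section is partial eta squared, which relates the variance attributable to an effect to that effect plus its error term:

\[
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\]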

Figure 17: Average response time with respect to experience

Analyzing the number of questions answered in the ATLS test, we found no significant difference between doctors and residents in the number of questions answered. The mean number of questions answered was 4.5 by doctors, 3.67 by senior residents, and 1 by junior residents.


There was no significant difference in response time (F(15, 1132), p > 0.9221, ηp² = 0.00713) for the UI elements (Figure 18) of color, size, left/right half of the screen, and inner/outer area of the screen, and there was no interaction effect.

Figure 18: Response time for different UI elements

Figure 19 shows the difference in brain signal amplitude, averaged for doctors and residents, in microvolts; a table of these microvolt values for the respective channels is given below. The amplitude peaked for residents in the O2, T8, FC6, and F8 channels.


Figure 19: Average electric data in micro volts against each EEG channel

The EEG heat maps show brain activity color coded from red to blue, where areas marked in red were most active and areas marked in blue were least active. The maps show that two areas of the brain were most active during the visual search task. Figure 20 shows the brain activity of a participant whose temporal region was active. Figure 21 shows another participant's heat map, with more activity in the temporal region, the superior parietal lobule, and the prefrontal cortex. The temporal region is active for visual and auditory signals and for language processing. The superior parietal lobule is associated with spatial orientation in tandem with visual sensory information. The prefrontal cortex is known to influence the planning of complex cognitive behavior, decision making, and the moderation of social behavior.

The average channel amplitudes (in microvolts) plotted in Figure 19 were:

Channel   Doctors   Residents
AF3        -7.63     -17.7
F7        -13.1      -18.4
F3        -11.8       -0.27
FC5         3.01     -16
T7          0.232    -18
P7         -5.19     -30.3
O1         -5.19     -24
O2         -1.56     -55.8
P8         -4.08      -2.3
T8        -10         39.3
FC6        -6.31      -5.56
F4         -9.97      23.07
F8         -6.5      -38.4
AF4        -6.11     -12.7

Figure 20: Heat map showing activity in the temporal and superior parietal cortex

Figure 21: Heat map showing activity in the temporal, superior parietal and pre frontal cortex region

The UI factors of color, size, left/right position, and inner/outer area of the screen did not have a significant effect on response time, but experience had a significant effect on response time, and there was no interaction effect (F(31, 1131), p > 0.69, ηp² = 0.023). The mean response time for doctors was 1.88 seconds per slide and for residents was 2.08 seconds per slide, with standard deviations of 0.96 and 1.04 respectively.

There was no significant difference in the NASA-TLX score (F(1, 8), p > 0.478, ηp² = 0.087). The mean score given by doctors was 42.93 and by residents was 51.26, with standard deviations of 15.87 and 14.67 respectively, as shown in Figure 22.

Figure 22: NASA TLX score across experience

Figure 23: User response for preference of UI elements


Figure 23 shows the user preference responses for color, size, left/right position, and inner/outer area of the screen. Most participants (9 out of 11) preferred monochromatic over polychromatic displays, and 8 out of 11 participants preferred targets in the outer area.

The System Usability Scale (SUS) results showed that there was no significant difference between the three user interfaces, as shown in Figure 24 (mean SUS scores of 73.33 for UI1, 71.04 for UI2, and 74.375 for UI3). When compared between doctors and residents, there was no significant difference for UI1 and UI3, whereas for UI2 the SUS score of junior residents was significantly lower (F(2, 12), p = 0.0203, ηp² = 0.579), with a mean and standard deviation of 61.667 and 10.104, than that of senior residents (77.5 and 4.33) and doctors (72.5 and 3.16).

Figure 24: SUS scores for the 3 UIs

The user responses to the questionnaire were analyzed, and Table 2 shows the average response to each question on a scale of 1 through 5, where 1 is strongly disagree, 2 is disagree, 3 is neutral, 4 is agree, and 5 is strongly agree.


Table 2: User response for device

# Question Response
1 Device comfort 2.91
2 Application load time for the device 2.167
3 The use of heads-up display to improve patient monitoring and decision making 3.08
4 Device usefulness in trauma pre-hospital care 3.00

Table 3: User response for Patient Vitals application

# Question Response
1 Navigation through the application 3.667
2 Clarity and understandability of the application 3.833
3 Flexibility of the application 3.333
4 Application ease of use 3.75
5 Learnability of the application 4.167
6 Ability to accomplish task with the help of this application 3.41
7 Organization of information in the screen 3.75
8 Appropriate content in the application 3.667


Table 4: User response for User Interface

# Question                                          UI-1   UI-2   UI-3
1 Optimum number of elements within the screen      3.58   3.00   3.167
2 Design of screen layout                           3.83   2.91   3.33

Table 2 shows that the responses for device comfort, the potential use of the heads-up display to improve patient monitoring and decision making, and the device's usefulness in trauma pre-hospital care were around 'neutral', ranging from 2.91 to 3.08, whereas the application load time statement fell in the 'disagree' range with a value of 2.167. Table 3 summarizes the questionnaire for the Patient Vitals simulation application. Application navigation, clarity and understandability, ease of use, learnability, organization of information, and appropriateness of content were in the 'agree' range, from 3.667 to 4.167, whereas flexibility of the application and the ability to accomplish the task with the help of the application were rated 'neutral', with values of 3.333 and 3.41 respectively. Table 4 shows the user response for the three UIs; for both the optimum number of elements and the design of the screen layout, the ordering was UI1 > UI3 > UI2.


5. CONCLUSION

In this study, the participants performed a scenario evaluation task (simulating real-world trauma scenarios). For the multitasking component, they were also tested on the ATLS at the same time. Analysis of the patient vitals simulation data showed that junior residents took longer to respond to the scenarios than senior residents and doctors.

The user responses show that UI1 was rated better for screen layout design and on-screen content than UI3 and UI2. The difference in screen layout between UI1 and UI2 was the presentation of the patient summary, which suggests that the participants preferred having the summary above the vital signs over no summary at all. The difference in the number of elements (heart rate, blood pressure, temperature, etc.) on the screen is that UI1 has 4 and UI2 has 3. So, within the same screen space, when the summary was presented UI1 had more information on the screen, and this did not affect attention levels compared to UI2.

The questionnaire results show that participants experienced discomfort using the device and that the application took little time to load. The users felt discomfort due to the heat generated when Google Glass™ was used for a long time (greater than 30 minutes), which included the learning phase before the experiment began.

For the visual search task, it was found that there was no significant difference for size, color, right/left position, or inner/outer area of the screen, but the response time was significantly lower for doctors than for residents. However, the questionnaire results showed that monochromatic search was easier than polychromatic search and that targets in the outer area were easier to find than targets in the inner area. This suggests that participants visually scanned the outer area first and then the inner area, which contradicts the research done by Lim et al., 2012.

The EEG data showed more activity in the T8 channel area, which includes the temporal region as well as the temporal-parietal and parietal areas. Past research (Beck, D. M., Muggleton, N., Walsh, V., & Lavie, N., 2006) shows that the superior parietal lobe is associated with visual search. The participants' brain power heat maps showed more activity in the T8, F4, F8, and O2 channels. Additionally, the activity observed in the prefrontal region indicates that the participants tried to focus during visual search. The signal did not show prolonged stress over the approximately 4 minutes of the experiment.


6. IMPLICATIONS

Research in the field of trauma care using wearable technology needs greater depth. The research community has been providing technological assistance for healthcare providers (as discussed in Section 1.1), but there is less emphasis on the cognitive evaluation of the human-computer interaction. Trauma care physicians undergo cognitive stress, and researchers are developing technology to help them work more effectively. This study analyzed cognitive stress using EEG and compared it with perceived cognition measured with NASA-TLX; this approach gives more insight into the trauma care physician's cognition. From the results, we can infer that a wearable augmented display device can enhance visualization for emergency response without additional mental workload and can aid decision making. A wearable augmented device provides ubiquitous information, especially in multitasking scenarios where the user can access information on an "as needed" basis. The mean channel data show that the prefrontal area was active for residents and that the temporal cortex was active for all participants. This indicates that the participants were not under high stress; in other words, wearable augmented reality devices can aid human decision making during transfer of care and can inform the design of information presentation on the wearable augmented reality device to improve user experience and reduce cognitive workload. This study showed no significant difference among the different screen layouts, which can help in designing adaptive UIs for emergency response. An adaptive UI shows relevant information based on user needs and creates less confusion for new users.


7. FUTURE WORK

In this study, the different UIs did not show any significant difference in response time; this can be utilized to develop an adaptive UI system in which the layout changes according to the needs of the doctor. Wearable augmented display devices with other form factors (large-display augmented reality devices such as Microsoft HoloLens, Meta 1, etc.) and other wearable devices such as smartwatches can be tested for the same application. Expert participants pointed out that trending patient vitals data could improve the experience, which can be achieved through large-screen AR devices. Using NASA-TLX for the user's perceived cognition while comparing it with brain signals gives research insight and helps developers design future products, so future devices should be evaluated with these techniques to move toward better usability.


8. REFERENCES

1. Abhyankar, P., Volk, R. J., Blumenthal-Barby, J., Bravo, P., Buchholz, A.,

Ozanne, E., Vidal, C. D., Col, N. & Stalmeier, P. (2013). Balancing the

presentation of information and options in patient decision aids: an updated

review. BMC medical informatics and decision making, 13(Suppl 2), S6.

2. Allanson, J., & Fairclough, S. H. (2004). A research agenda for physiological

computing. Interacting with computers, 16(5), 857-878.

3. American College of Surgeons (2016). Consultation/Verification Program,

Reference Guide of Suggested Classification. American College of Surgeons

(Version 10.2 pp. 3)

4. Anderson, E. W., Potter, K. C., Matzen, L. E., Shepherd, J. F., Preston, G. A., &

Silva, C. T. (2011, June). A user study of visualization effectiveness using EEG

and cognitive load. In Computer Graphics Forum (Vol. 30, No. 3, pp. 791-800).

Blackwell Publishing Ltd.

5. Andersson, T., & Värbrand, P. (2007). Decision support tools for ambulance

dispatch and relocation. Journal of the Operational Research Society, 58(2), 195-

201.

6. Atac, R. (2012, May). Scorpion HMCS developmental and operational flight test

status and results. In SPIE Defense, Security, and Sensing (pp. 838303-838303).

International Society for Optics and Photonics.


7. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., & MacIntyre, B.

(2001). Recent advances in augmented reality. IEEE computer graphics and

applications, 21(6), 34-47.

8. Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the

system usability scale. Intl. Journal of Human–Computer Interaction,24(6), 574-

594.

9. Beck, D. M., Muggleton, N., Walsh, V., & Lavie, N. (2006). Right parietal cortex

plays a critical role in change blindness. Cerebral Cortex, 16(5), 712-717.

10. Besedeš, T., Deck, C., Sarangi, S., & Shor, M. (2015). Reducing choice overload

without reducing choices. Review of Economics and Statistics, 97(4), 793-802.

11. Bloom, M. B., Salzberg, A. D., & Krummel, T. M. (2002). Advanced technology

in surgery. Current problems in surgery, 39(8), 745-830.

12. Bost, N., Crilly, J., Patterson, E., & Chaboyer, W. (2012). Clinical handover of

patients arriving by ambulance to a hospital emergency department: a qualitative

study. International Emergency Nursing, 20(3), 133-141.

13. Bower, M., Howe, C., McCredie, N., Robinson, A., & Grover, D. (2014).

Augmented Reality in education–cases, places and potentials. Educational Media

International, 51(1), 1-15

14. Brooke, J. (1996). SUS-A quick and dirty usability scale. Usability evaluation in

industry, 189(194), 4-7.

15. Buschman, T. J., & Miller, E. K. (2007). Top-down versus bottom-up control of

attention in the prefrontal and posterior parietal cortices. science,315(5820), 1860-

1862.


16. Canadian Association of Emergency Physicians (CAEP). (2002). The future of

emergency medicine in Canada: Submission from CAEP to the Romanow

Commission. Part 2. CJEM, 4(6), 431–438.

17. Caplin, A., & Dean, M. (2011). Search, choice, and revealed preference.

Theoretical Economics, 6(1), 19-48.

18. Carr, B. G., Caplan, J. M., Pryor, J. P., & Branas, C. C. (2006). A meta-analysis

of prehospital care times for trauma. Prehospital Emergency Care, 10(2), 198-

206.

19. Casey, C. J. (1999, July). Helmet-mounted displays on the modern battlefield.

In AeroSense'99 (pp. 270-277). International Society for Optics and Photonics.

20. Casey, C. J., & Melzer, J. E. (1991, August). Part-task training with a helmet-

integrated display simulator system. In Medical Imaging'91, San Jose, CA (pp.

175-178). International Society for Optics and Photonics.

21. Chow, E. H., Thadani, D. R., Wong, E. Y., & Pegrum, M. (2015, July). Mobile

Technologies and Augmented Reality: Early Experiences in Helping Students

Learn About Academic Integrity and Ethics. International Journal of Humanities

Social Sciences and Education (IJHSSE), 2(7), 112-120.

22. Context Surgery, Inc. Context Surgery - Innovative Surgical Support System

Retrieved from: http://www.contextsurgery.com/.

23. Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for

analysis of single-trial EEG dynamics including independent component analysis.

Journal of Neuroscience Methods, 134(1), 9-21.


24. Diaz, M. A., Hendey, G. W., & Bivins, H. G. (2005). When is the helicopter

faster? A comparison of helicopter and ground ambulance transport times.

Journal of Trauma and Acute Care Surgery, 58(1), 148-153.

25. Eglin, M., Robertson, L. C., & Knight, R. T. (1991). Cortical substrates

supporting visual search in humans. Cerebral Cortex, 1(3), 262-272.

26. Ek, B., & Svedlund, M. (2015). Registered nurses' experiences of their decision‐

making at an Emergency Medical Dispatch Centre. Journal of clinical

nursing, 24(7-8), 1122-1131.

27. El-Masri, S., & Saddik, B. (2012). An emergency system to improve ambulance

dispatching, ambulance diversion and clinical handover communication—A

proposed model. Journal of medical systems, 36(6), 3917-3923.

28. El-Menyar, A., Asim, M., Latifi, R., & Al-Thani, H. (2015). Research in

Emergency and Critical Care Settings: Debates, Obstacles and Solutions. Science

and engineering ethics, 1-22.

29. Emotiv Epoc – Hardware, 14 Channel EEG (2014). Retrieved from

https://www.emotiv.com/epoc/

30. Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: oscillations

and synchrony in top–down processing. Nature Reviews Neuroscience, 2(10),

704-716.

31. Engelen L. Is Google Glass Useful in the Operating Room? (2013) Retrieved

from: http://www.linkedin.com/today/post/article/20130815203138-19886490-

google-glas-in-or


32. Esfahani, E. T., & Sundararajan, V. (2012). Classification of primitive shapes

using brain–computer interfaces. Computer-Aided Design, 44(10), 1011-1019.

33. Fischer, M., Lim, Y. Y., Lawrence, E., & Ganguli, L. K. (2008, July).

ReMoteCare: Health monitoring with streaming video. In 2008 7th International

Conference on Mobile Business (pp. 280-286). IEEE.

34. Foote, B. D. (1998, August). Design guidelines for advanced air-to-air helmet-

mounted display systems. In Aerospace/Defense Sensing and Controls (pp. 94-

102). International Society for Optics and Photonics.

35. Forlizzi, J., & Battarbee, K. (2004, August). Understanding experience in

interactive systems. In Proceedings of the 5th conference on Designing interactive

systems: processes, practices, methods, and techniques (pp. 261-268). ACM.

36. Frykberg, E. R., & Tepas, J. J.,3rd. (1988). Terrorist bombings. lessons learned

from Belfast to Beirut. Annals of Surgery, 208(5), 569-576.

37. Ganapathy, S., Anderson, G. J., & Kozintsev, I. V. (2011, March). Empirical

evaluation of augmented information presentation on small form factors-

navigation assistant scenario. In VR Innovation (ISVRI), 2011 IEEE International

Symposium on (pp. 75-80). IEEE.

38. Glauser, W. (2013). Doctors among early adopters of Google Glass. Canadian

Medical Association. Journal, 185(16), 1385.

39. Google developers: Glass Design Guidelines (2015). Retrieved from

https://developers.google.com/glass/design/principles


40. Greenko, J., Mostashari, F., Fine, A., & Layton, M. (2003). Clinical evaluation of

the Emergency Medical Services (EMS) ambulance dispatch-based syndromic

surveillance system, New York City. Journal of Urban Health, 80(1), i50-i56.

41. Guise, J., Meckler, G., O'Brien, K., Curry, M., Engle, P., Dickinson, C., &

Lambert, W. (2015). Patient safety perceptions in pediatric out-of-hospital

emergency care: Children's safety initiative. The Journal of Pediatrics, 167(5),

1143-1148. e1.

42. Hart, S. G. (2006, October). NASA-task load index (NASA-TLX); 20 years later.

In Proceedings of the human factors and ergonomics society annual

meeting (Vol. 50, No. 9, pp. 904-908). Sage Publications.

43. Hasegawa, S., Miyao, M., Matsunuma, S., Fujikake, K., & Omori, M. (2008).

Effects of aging and display contrast on the legibility of characters on mobile

phone screens. iJIM, 2(4), 7-12.

44. Hashimoto, D. A., Phitayakorn, R., Fernandez-del Castillo, C., & Meireles, O.

(2016). A blinded assessment of video quality in wearable technology for

telementoring in open surgery: the Google Glass experience. Surgical

endoscopy, 30(1), 372-378.

45. Hedges, J. R., Feero, S., Moore, B., Shultz, B., & Haver, D. W. (1988). Factors

contributing to paramedic onscene time during evaluation and management of

blunt trauma. The American journal of emergency medicine, 6(5), 443-448.

46. Hopf, J. M., Boelmans, K., Schoenfeld, M. A., Luck, S. J., & Heinze, H. J.

(2004). Attention to features precedes attention to locations in visual search:


Evidence from electromagnetic brain responses in humans. The Journal of

Neuroscience, 24(8), 1822-1832.

47. Injury Prevention and Control: Data and Statistics. Centers for Disease Control and Prevention. Retrieved from www.cdc.gov

48. Kaufmann, C., Rhee, P., & Burris, D. (1999). Telepresence surgery system

enhances medical student surgery training. Studies in health technology and

informatics, 174-178.

49. Keil, A., Debener, S., Gratton, G., Junghöfer, M., Kappenman, E. S., Luck, S. J.,

Luu, P., Miller, G. A. & Yee, C. M. (2014). Committee report: Publication

guidelines and recommendations for studies using electroencephalography and

magnetoencephalography. Psychophysiology, 51(1), 1-21.

50. Kobayashi, K., Sumi, Y., & Mase, K. (1998, April). Information presentation

based on individual user interests. In Knowledge-Based Intelligent Electronic

Systems, 1998. Proceedings KES'98. 1998 Second International Conference

on (Vol. 1, pp. 375-383). IEEE.

51. Lavie, N., & Fockert, J. d. (2006). Frontal control of attentional capture in visual

search. Visual Cognition, 14(4-8), 863-876.

52. Lee, S. (2011). The role of preparedness in ambulance dispatching. Journal of the

Operational Research Society, 62(10), 1888-1897.

53. LiKamWa, R., Wang, Z., Carroll, A., Lin, F. X., & Zhong, L. (2014, June).

Draining our glass: An energy and heat characterization of google glass. In

Proceedings of 5th Asia-Pacific Workshop on Systems (p. 10). ACM.


54. McNaney, R., Vines, J., Roggen, D., Balaam, M., Zhang, P., Poliakov, I., &

Olivier, P. (2014, April). Exploring the acceptability of google glass as an

everyday assistive device for people with parkinson's. In Proceedings of the 32nd

annual ACM conference on Human factors in computing systems (pp. 2551-

2554). ACM.

55. Mekni, M., & Lemieux, A. (2014, April). Augmented reality: Applications,

challenges and future trends. In Applied Computational Science—Proceedings of

the 13th International Conference on Applied Computer and Applied

Computational Science (ACACOS ‘14) Kuala Lumpur, Malaysia (pp. 23-25).

56. Melton, J. T., Jain, S., Kendrick, B., & Deo, S. D. (2007). Helicopter emergency

ambulance service (HEAS) transfer: An analysis of trauma patient case-mix,

injury severity and outcome. Annals of the Royal College of Surgeons of England,

89(5), 513-516.

57. Melzer, J. E. (2000). Head-mounted displays. In Digital Avionics Handbook,

Second Edition-2 Volume Set. CRC Press.

58. Mitrasinovic, S., Camacho, E., Trivedi, N., Logan, J., Campbell, C., Zilinyi, R.,

Lieber, B., Bruce, E., Taylor, B., Martineau, D. and Dumont, E.L., 2015. Clinical

and surgical applications of smart glasses. Technology and Health Care, 23(4),

pp.381-401.

59. Montgomery, K., Mundt, C., Thonier, G., Tellier, A., Udoh, U., Barker, V., ... &

Swain, J. (2004, September). Lifeguard-a personal physiological monitor for

extreme environments. In Engineering in Medicine and Biology Society, 2004.


IEMBS'04. 26th Annual International Conference of the IEEE (Vol. 1, pp. 2192-

2195). IEEE.

60. Muensterer, O. J., Lacher, M., Zoeller, C., Bronstein, M., & Kübler, J. (2014).

Google Glass in pediatric surgery: an exploratory study. International journal of

surgery, 12(4), 281-289.

61. NASA Ames Research Center: Manual of NASA Task Load Index (TLX), v 1.0.

(1987). Retrieved from http://human-factors.arc.nasa.gov/groups/TLX/index.html

62. Navab, N., Traub, J., Sielhorst, T., Feuerstein, M., & Bichlmeier, C. (2007).

Action-and workflow-driven augmented reality for computer-aided medical

procedures. IEEE Computer Graphics and Applications, 27(5), 10-14.

63. Navarro, C., & Sikorski, S. (1999). Datalink communication in flight deck

operations: A synthesis of recent studies. The International Journal of Aviation

Psychology, 9(4), 361-376.

64. Nayak, K., Kotak, D., & Narula, H. (2014). Google glass: Taking wearable

technology to the next level. International Journal of Current Engineering and

Technology, 4(4).

65. Nobre, A. C., Coull, J. T., Walsh, V., & Frith, C. D. (2003). Brain activations

during visual search: contributions of search efficiency versus feature binding.

Neuroimage, 18(1), 91-103.

66. Pantelopoulos, A., & Bourbakis, N. G. (2010). A survey on wearable sensor-

based systems for health monitoring and prognosis. IEEE Transactions on

Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(1), 1-12.


67. Patterson, R., Winterbottom, M. D., & Pierce, B. J. (2006). Perceptual issues in

the use of head-mounted visual displays. Human Factors, 48(3), 555-573.

68. Pham, T. D., & Tran, D. (2012, November). Emotion recognition using the

emotiv epoc device. In International Conference on Neural Information

Processing (pp. 394-399). Springer Berlin Heidelberg.

69. Repede, J. F., & Bernardo, J. J. (1994). Developing and validating a decision

support system for locating emergency medical vehicles in Louisville,

Kentucky. European journal of operational research, 75(3), 567-581.

70. Riahi, R. R., & Cohen, P. R. (2012). Laptop-induced erythema ab igne: Report

and review of literature. Dermatology Online Journal, 18(6).

71. Roberts, D. W., Strohbehn, J. W., Hatch, J. F., Murray, W., & Kettenberger, H.

(1986). A frameless stereotaxic integration of computerized tomographic imaging

and the operating microscope. Journal of neurosurgery, 65(4), 545-549

72. Sarter, M., Givens, B., & Bruno, J. P. (2001). The cognitive neuroscience of

sustained attention: where top-down meets bottom-up. Brain research

reviews, 35(2), 146-160.

73. Schaik, P. V., & Ling, J. (2001). The effects of frame layout and differential

background contrast on visual search performance in web pages. Interacting with

Computers, 13, 513-525.

74. Schmidt, G. W., & Osborn, D. B. (1995, May). Head-mounted display system for

surgical visualization. In Photonics West'95 (pp. 345-350). International Society

for Optics and Photonics.


75. Seymour, N. E., Gallagher, A. G., Roman, S. A., O’Brien, M. K., Bansal, V. K.,

Andersen, D. K., & Satava, R. M. (2002). Virtual reality training improves

operating room performance: results of a randomized, double-blinded study.

Annals of surgery, 236(4), 458-464.

76. Shen, S., & Shaw, M. (2004). Managing coordination in emergency response

systems with information technologies. AMCIS 2004 Proceedings, 252.

77. Sinkin, J. C., Rahman, O. F., & Nahabedian, M. Y. (2016). Google Glass in the

Operating Room: The Plastic Surgeon’s Perspective. Plastic and reconstructive

surgery, 138(1), 298-302.

78. Slovis, C. M., Carruth, T. B., Seitz, W. J., Thomas, C. M., & Elsea, W. R. (1985).

A priority dispatch system for emergency medical services. Annals of emergency

medicine, 14(11), 1055-1060.

79. Sonntag, A. (2015). Search costs and adaptive consumers: Short time delays do

not affect choice quality. Journal of Economic Behavior & Organization, 113, 64-

79.

80. Speier, C. (2006). The influence of information presentation formats on complex

task decision-making performance. International Journal of Human-Computer

Studies, 64(11), 1115-1131.

81. Spitzer, C. R., & Spitzer, C. (Eds.). (2000). Digital Avionics Handbook. CRC

press.

82. Steriade, M. (2005). Cellular substrates of brain rhythms.

Electroencephalography: Basic principles, clinical applications, and related

fields, 5, 31-83.


83. Szalavári, Z., Eckstein, E., & Gervautz, M. (1998, November). Collaborative

gaming in augmented reality. In Proceedings of the ACM symposium on Virtual

reality software and technology (pp. 195-204). ACM.

84. Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H. (2011). Eye

guidance in natural vision: Reinterpreting salience. Journal of vision, 11(5), 5-5.

85. Teplan, M. (2002). Fundamentals of EEG measurement. Measurement science

review, 2(2), 1-11.

86. Wac, K., Konstantas, D., Van Halteren, A., Bults, R., Widya, I., Dokovsky, N.,

Koprinkov, G., Jones, V. & Herzog, R. (2004). Mobile Patient Monitoring: The

MobiHealth System. Journal on Information Technology in Healthcare, 2(5),

pp.365-373.

87. Wall, D., Ray, W., Pathak, R. D., & Lin, S. M. (2014). A Google Glass

application to support shoppers with dietary management of diabetes. Journal of

diabetes science and technology, 8(6), 1245-1246.

88. White, R. D., Hankins, D. G., & Bugliosi, T. F. (1998). Seven years’ experience

with early defibrillation by police and paramedics in an emergency medical

services system. Resuscitation, 39(3), 145-151.

89. Wickens, C. D., Hellenberg, J., & Xu, X. (2002). Pilot maneuver choice and

workload in free flight. Human Factors: The Journal of the Human Factors and

Ergonomics Society, 44(2), 171-188.

90. Yilmaz, T., Foster, R., & Hao, Y. (2010). Detecting vital signs with wearable

wireless sensors. Sensors, 10(12), 10837-10862.


91. Ziefle, M. (2010). Information presentation in small screen devices: The trade-off

between visual density and menu foresight. Applied ergonomics,41(6), 719-730.


Appendix I - Questionnaire

Perceived Usefulness in Emergency Response Scenario

1. Using Heads-up display would improve my patient monitoring and decision

making performance.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

2. I would find Heads-up display device useful in trauma pre-hospital care.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

3. Specialized instructions concerning Google Glass™ were available to me.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

4. I like/dislike the idea of using Google Glass™ for trauma care.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree


User Experience

1. My interactions with Google Glass™ were clear and understandable.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

2. I would find Google Glass™ to be flexible to interact with.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

3. It is easy to navigate through this application.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

4. The application loads too slowly.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

5. The content in the application is appropriate.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree


6. It is easy to use this application.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

7. I learned to use this application quickly.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

8. I can effectively complete my tasks using this application.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

9. The organization of information on the system screen is clear.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree

10. Screens were well designed.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree


11. The symbols or acronyms were used appropriately.

|_____________|_____________|_____________|_____________|_____________|

Strongly Disagree

Disagree Neutral Agree Strongly Agree


Appendix II – NASA TLX Mental Workload Rankings

For each of the pairs listed below, circle the scale title that represents the more important

contributor to workload in the display.

Mental Demand or Physical Demand

Mental Demand or Temporal Demand

Mental Demand or Performance

Mental Demand or Effort

Mental Demand or Frustration

Physical Demand or Temporal Demand

Physical Demand or Performance

Physical Demand or Effort

Physical Demand or Frustration

Temporal Demand or Performance

Temporal Demand or Frustration

Temporal Demand or Effort

Performance or Frustration

Performance or Effort

Frustration or Effort


Physical Demand How physically demanding was the task?

|__|__| _ |__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|

Very low Very High

Temporal Demand How hurried or rushed was the pace of the task?

|__|__| _ |__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|

Very low Very High

Performance How successful were you in accomplishing what you were asked to do?

|__|__| _ |__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|

Perfect Failure

Effort How hard did you have to work to accomplish your level of

performance?

|__|__| _ |__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|

Very low Very High

Frustration How insecure, discouraged, irritated, stressed, and annoyed

were you?

|__|__| _ |__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|__|

Very low Very High


Appendix III - Google Glass™ Patient information streaming – Scenarios and

Triage indices

Response driven cases (The surgeon has to call the emergency team for a change in decision or

to keep any other team ready for the arriving case):

Scenario 1

• A patient is involved in an explosion while working in an oil factory; he sustains 80% third-degree burns.

Scenario 2

• A patient was treated for 5% second degree burns on both hands in a local hospital.

They want to transfer to a specialty hospital quickly, so the referring physician suggests

air ambulance.

Scenario 3

• A 60-year-old woman drives her car into a pole on the edge of the road and sustains

severe facial injuries. The first responders reach the spot and send a description of the

injury that says possible fracture in the eye socket, dental injury, bruising in the lower

jaw area, and a lot of bleeding from the mouth.

Non-response cases (The surgeon has to just monitor the patient’s vitals until they reach the

hospital)

Scenario 1

• A 50-year-old man suffers a gunshot wound to the thigh and is in a ground ambulance on the way to the hospital. The doctor has to check vital signs for possible problems that may occur in this situation.


Scenario 2

• A 35-year-old man is involved in a head-on collision and is transferred to the hospital by

ground ambulance. He has a deep laceration of the scalp and skull fracture, and

crepitance over his right lateral chest wall.

Scenario 3

• A 40-year-old man working in a semi-automated car assembly factory suffers bilateral upper extremity fractures when a steel assembly unit falls loose from its fittings.


Appendix IV – Triage Indices

Triage indices:

Vital sign                  Normal    Intermediate      Extreme
Heart Rate                  50-100    40-50, 100-140    <40, >140
Temperature (F)             97-100    95-97, 100-102    <95, >102
Respiratory Rate            12-16     8-12, 16-20       <8, >25
SpO2                        90-100    80-90             <80
Blood Pressure (Systolic)   120       120-160           >160 or <100