
Eurographics Conference on Visualization (EuroVis) 2014
H. Carr, P. Rheingans, and H. Schumann (Guest Editors)

Volume 33 (2014), Number 3
DOI: 10.1111/cgf.12381

A Gaze-enabled Graph Visualization to Improve Graph Reading Tasks

Mershack Okoe, Sayeed Safayet Alam, and Radu Jianu

Florida International University, USA

Abstract

Performing typical network tasks such as node scanning and path tracing can be difficult in large and dense graphs. To alleviate this problem we use eye tracking as an interactive input to detect tasks that users intend to perform and then produce unobtrusive visual changes that support these tasks. First, we introduce a novel fovea-based filtering that dims out edges with endpoints far removed from a user's view focus. Second, we highlight edges that are being traced at any given moment or have been the focus of recent attention. Third, we track recently viewed nodes and increase the saliency of their neighborhoods. All visual responses are unobtrusive and easily ignored, to avoid unintentional distraction and to account for the imprecise, low-resolution nature of eye tracking. We also introduce a novel gaze-correction approach that relies on knowledge of the network layout to reduce eye-tracking error. Finally, we present results from a controlled user study showing that our methods led to a statistically significant accuracy improvement in one of two network tasks and that our gaze-correction algorithm enables more accurate eye-tracking interaction.

Keywords: Eye tracking, gaze-contingent graph visualization.

1. Introduction

Network analysis plays an important part in domains such as neuroscience [BS09], genomics and proteomics [CCNS08], software engineering [GN00], or social sciences [BMBL09]. Interaction is instrumental in allowing users to weed through the scale, complexity, and clutter inherent to visualizations of real-life networks. Here we explore the use of eye tracking as an interactive input to detect users' intentions and support them with slight changes in the visualization. The use of eye tracking as an input has been explored in the human-computer interaction (HCI) community [Duc02], but there are few results in the visualization domain.

Specifically, we introduce three types of interactions. First, we reduce clutter by using a novel fovea-based filtering that dims edges that pass through the user's view focus but have their endpoints far outside of the user's fovea. Second, we increase the saliency of edges that users are viewing or have recently viewed. Third, we keep track of nodes that were recently viewed and increase the salience of their neighborhoods. All visual responses are gradual, incremental rather than binary, and visually subtle.

Thus, by design, our interactions are gaze-contingent [Duc02]. We use gaze coordinates to infer users' task intentions and to support these tasks visually, as unobtrusively as possible, so as to minimize distraction. This approach also relates to attentive interfaces [Duc02, Sel04] and multimodal interfaces [Ovi03], but contrasts with early HCI efforts to use eye tracking in ways analogous to manual pointing and clicking. Merely connecting eye-tracking input to otherwise conventional network interactions is limited by particularities of eye movements and eye-tracking technology. Specifically, as noted by [ZMI99], the eyes are not a control organ, eye-tracking input is generally low resolution and inaccurate, and the absence of a trigger command is difficult to compensate for [Jac90].

We also contribute a gaze-correction algorithm that uses knowledge of the visualization layout to reduce eye-tracking error. Insufficient calibration sometimes leads to screen regions in which gaze input is offset from the user's real viewing point. Our algorithm relies on the known visual positioning of nodes on the screen to detect nodes that are likely to be viewed.

We evaluated our gaze-enabled network visualization in a within-subject user study with twelve participants. First, we asked participants to perform two types of tasks: (i) identify whether there is a direct connection between two nodes; and (ii) identify the shortest path between two nodes. In a third task, designed to evaluate our gaze-correction algorithm, users selected as many nodes as possible in a given time by looking at them. Our results showed a 30% improvement in the direct connection task (p = 0.02) and a 25% improvement in the node selection task (p = 0.01); results in the path task were not significant.

Given the unavoidable connection between eyes and data visualization, the fact that people's gazes are linked to the tasks they are performing [YR67, SCC13], and that eye tracking is on its way to becoming a component of regular workstations [Duc07, JK03], we hypothesize that visualization research can benefit from exploring the use of eye tracking as an input channel.

2. Related Work

The fovea, a small area in the center of the retina, is responsible for our high-resolution vision. The larger part of our field of view (i.e., the parafoveal and peripheral regions) is low resolution. The illusion of full high-definition vision is created by an unconscious scanning process: the fovea performs quick translations, called saccades, between short moments of focus, called fixations. Eye-tracking technology allows us to locate users' points of gaze [WM87, Jac91].
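For illustration, raw gaze samples are commonly grouped into fixations with a dispersion-threshold scheme. The sketch below is ours, not the paper's (Section 3.1 obtains fixations from the eye-tracker API); the function names and thresholds are assumptions.

    # Minimal dispersion-threshold fixation detection (illustrative only).
    # A sample is a tuple (x, y, t): screen coordinates in px, time in seconds.

    def dispersion(window):
        # Horizontal plus vertical spread of a window of gaze samples.
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
        # Grow a window while samples stay compact; emit a fixation (the
        # window centroid) once the window lasts long enough.
        fixations, i = [], 0
        while i < len(samples):
            j = i + 1
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            if samples[j - 1][2] - samples[i][2] >= min_duration:
                window = samples[i:j]
                cx = sum(p[0] for p in window) / len(window)
                cy = sum(p[1] for p in window) / len(window)
                fixations.append((cx, cy))
                i = j   # consume the window and continue after it
            else:
                i += 1  # too brief to count as a fixation: slide forward
        return fixations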

Most often, gaze tracking is used for data collection in offline, post hoc analyses of human visual perception [Duc07]. In data visualization, Huang et al. used eye tracking to investigate the cognitive processes involved in reading graph visualizations [HEH08], Pohl et al. used it to understand how user performance is affected by network layout [PSD09], Burch et al. investigated visual exploration behavior and task solution strategies for hierarchical tree layouts [BAA13, BKH11], and Tory et al. used eye tracking to analyze the effectiveness of visualization designs that combine 2D and 3D views [TAK05]. In a different approach, Andrienko et al. identified visual analytics methods applicable to eye-tracking data analysis [AABW12], while Steichen et al. note user and task characteristics that can be inferred from eye-tracking data [SCC13]. Unlike these works, we use eye-tracking data as an input source to change visualizations in real time.

The appeal of the eyes' speed led HCI researchers to explore gaze as an actuatory input in ways analogous to manual input. This approach has met with limited success for several reasons. First, while very fast, gaze input comes with disadvantages such as low accuracy, jitter, drift, offsets, and calibration needs [Duc07, JK03, KPW07]. Second, finding a gaze equivalent of a trigger command is not trivial and leads to the Midas touch phenomenon: the inability of the interface to reliably distinguish between looking and controlling [Jac91]. Ultimately, the duration of a fixation, or dwell time, has been established as the most effective way to trigger commands [WM87, Jac91]. However, low dwell thresholds amplify the Midas touch problem by triggering commands inadvertently, while high dwell thresholds negate the speed advantage of gaze input.

The current consensus is that eyes are not suited for interface control [Jac91, JK03, ZMI99, Zha03]. Instead, Jacob proposed that interfaces should use gaze as an indicator of intention and should react with gradual, unobtrusive changes [Jac91, JK03], a view formalized by the concept of attentive interfaces [ADS05, Ver02, VSCM06, VS08, HMR05, RHN03]. The research described here aligns with this paradigm and also draws inspiration from work in gaze-contingent rendering [OHM04, D07, ODH02], where scenes are drawn in high resolution only in foveated screen areas to reduce computational costs.

In the visualization domain, the use of eye tracking as an interactive input is minimal. Streit et al. [SLMS09] use gaze information to enlarge visualization regions of interest and to navigate or manipulate 3D scenes. This work fits in the HCI control paradigm. Our work differs through the adoption of the attentive-interface approach, by which we produce unobtrusive visual responses that minimize distraction and are complementary to traditional manual control. A further contribution over previous work is the gaze-correction method described in the following section.

3. Implementation

Our implementation focuses on two issues: improving gaze accuracy and providing interactive visual responses. The interactive responses are: (i) a novel fovea-based filtering that dims out edges with endpoints far removed from a user's view focus; (ii) highlighting edges that are being traced at any given moment or have been the focus of recent attention; (iii) tracking recently viewed nodes and increasing the saliency of their neighborhoods. We detail these techniques in the following sections.
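As a concrete illustration of response (i), edge opacity can be made a smooth function of the distance between the gaze point and the edge's nearest endpoint, so the response is gradual rather than binary. The sketch below is ours; the radius, falloff, and opacity floor are assumed values, not the paper's parameters.

    import math

    FOVEA_RADIUS = 150.0  # px inside which edges stay fully visible (assumed)
    FALLOFF = 300.0       # px over which edges fade out (assumed)
    MIN_ALPHA = 0.15      # dimmed edges stay faintly visible, never vanish

    def edge_alpha(gaze, a, b):
        # Opacity for edge (a, b) given the current gaze point: edges with
        # an endpoint near the gaze stay opaque; edges whose endpoints are
        # both far away are dimmed gradually toward MIN_ALPHA.
        d = min(math.dist(gaze, a), math.dist(gaze, b))
        if d <= FOVEA_RADIUS:
            return 1.0
        return max(MIN_ALPHA, 1.0 - (d - FOVEA_RADIUS) / FALLOFF)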

3.1. Gaze-correction

Due to calibration limitations, gazes reported by eye trackers are sometimes offset from real gaze coordinates. We alleviate this problem by leveraging the known network layout. We use the eye-tracker API to compute fixations from individual gaze samples. We match subsequent long fixations (200-300 ms) to proximal nodes that have a relative pairwise positioning similar to that of the fixations. We then assume these nodes were likely the target of the user's attention and compute offsets between them and the fixations. We aggregate these offsets over time, gradually constructing and adjusting an offset map over the screen space. This offset map is then used to correct the coordinates of all incoming gaze samples (Fig. 1).
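The offset map can be realized as a coarse grid over the screen, with each cell accumulating the mean displacement between fixations and the nodes they were matched to. The sketch below is our reading of this approach, not the authors' code; the cell size, matching radius, and tolerance are assumptions.

    import math
    from collections import defaultdict

    CELL = 100.0  # grid resolution in px (assumed)

    class OffsetMap:
        # Accumulates fixation-to-node offsets per screen-space cell and
        # corrects incoming gaze samples by the mean offset of their cell.
        def __init__(self):
            self.cells = defaultdict(lambda: [0.0, 0.0, 0])  # dx, dy, count

        def add_offset(self, fixation, node):
            cell = (int(fixation[0] // CELL), int(fixation[1] // CELL))
            acc = self.cells[cell]
            acc[0] += node[0] - fixation[0]
            acc[1] += node[1] - fixation[1]
            acc[2] += 1

        def correct(self, gx, gy):
            acc = self.cells.get((int(gx // CELL), int(gy // CELL)))
            if not acc or acc[2] == 0:
                return gx, gy
            return gx + acc[0] / acc[2], gy + acc[1] / acc[2]

    def match_pair(f1, f2, nodes, radius=80.0, tol=25.0):
        # Find a node pair whose relative positioning resembles that of two
        # consecutive long fixations; return (node_for_f1, node_for_f2).
        best = None
        for n1 in nodes:
            if math.dist(f1, n1) > radius:
                continue
            for n2 in nodes:
                if n2 is n1 or math.dist(f2, n2) > radius:
                    continue
                # compare the fixation-to-fixation and node-to-node vectors
                err = math.dist((n2[0] - n1[0], n2[1] - n1[1]),
                                (f2[0] - f1[0], f2[1] - f1[1]))
                if err < tol and (best is None or err < best[0]):
                    best = (err, n1, n2)
        return (best[1], best[2]) if best else None

In this reading, each matched fixation-node pair feeds the map through add_offset, and all subsequent gaze samples pass through correct before driving the visual responses described above.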
