
Typing on Split Keyboards with Peripheral Vision

Yiqin Lu¹,², Chun Yu¹,²,³, Shuyi Fan¹, Xiaojun Bi⁴, Yuanchun Shi¹,²,³

¹Department of Computer Science and Technology, Tsinghua University, Beijing, China
²Key Laboratory of Pervasive Computing, Ministry of Education, China

³Global Innovation eXchange Institute, Tsinghua University, Beijing, China
⁴Department of Computer Science, Stony Brook University, Stony Brook, NY, United States

    {lu-yq16,fansy15}@mails.tsinghua.edu.cn, {chunyu,shiyc}@tsinghua.edu.cn, xiaojun@cs.stonybrook.edu

ABSTRACT
Split keyboards are widely used on hand-held touchscreen devices (e.g., tablets). However, typing on a split keyboard often requires eye movement and attention switching between the two halves of the keyboard, which slows users down and increases fatigue. We explore peripheral typing, a superior typing mode in which a user focuses her visual attention on the output text and keeps the split keyboard in peripheral vision. Our investigation showed that peripheral typing reduced attention switching, enhanced user experience, and increased overall performance (27 WPM, 28% faster) over the typical eyes-on typing mode. This typing mode can be well supported by accounting for the typing behavior in statistical decoding. Based on our study results, we designed GlanceType, a text entry system that supports both peripheral and eyes-on typing modes for real typing scenarios. Our evaluation showed that peripheral typing not only coexisted well with existing eyes-on typing, but also substantially improved text entry performance. Overall, peripheral typing is a promising typing mode, and supporting it would significantly improve text entry performance on a split keyboard.

CCS CONCEPTS
• Human-centered computing → Text input; Touch screens; User studies;

KEYWORDS
Split keyboard; peripheral vision; gaze.

    denotes the corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. CHI 2019, May 4–9, 2019, Glasgow, Scotland UK. © 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-5970-2/19/05...$15.00 https://doi.org/10.1145/3290605.3300430

ACM Reference Format: Yiqin Lu, Chun Yu, Shuyi Fan, Xiaojun Bi, and Yuanchun Shi. 2019. Typing on Split Keyboards with Peripheral Vision. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3290605.3300430

1 INTRODUCTION
Split keyboards are specially designed to account for the form factor of tablet computers (e.g., Samsung Galaxy Tab, Apple iPad, and Microsoft Surface). A split keyboard splits the conventional QWERTY keyboard into two halves and positions them on the left and right of the touchscreen, respectively. Users hold the device with two hands and type with both thumbs. Compared with regular keyboards, this allows for a steadier grip while walking, sitting, or lying, saves precious screen real estate [17], and potentially improves typing performance due to two-thumb use [3, 12, 21, 24]. Despite these advantages, a split keyboard suffers from a weakness in the conventional eyes-on typing mode: users have to frequently switch visual focus back and forth between the two keyboard halves. This not only causes eye/neck fatigue but also limits typing speed [5, 16].

Figure 1: A user types with the split keyboard on a tablet computer. He needs to see three locations when typing (marked in blue circles): the text feedback (the URL bar in this figure), and the left and right keyboard halves. When he is focusing on the field of text feedback, the two split keyboard halves are located in his peripheral vision.

In this paper, we investigate typing with peripheral vision (peripheral typing), in which a user focuses her visual attention on the output text and keeps the keyboard in her peripheral vision (Figure 1). This mode does not require the user to switch visual attention frequently between different parts of the interface. Peripheral typing assumes users are familiar with two-thumb typing (e.g., on a smartphone) and can type with spatial and muscle memory when the keyboard is located in peripheral vision. Unlike touch typing [18] and the invisible keyboard [39], where keyboard feedback vanishes during typing, a user still receives partial visual feedback of the keyboard when typing with peripheral vision, which contrasts with and complements recent progress in text entry research. We believe interaction with peripheral vision is a promising paradigm as more and more interaction tasks [10, 27, 34] compete for users' visual attention.

We follow an iterative approach to validate the feasibility of peripheral typing on a split keyboard. We first conduct a Wizard-of-Oz experiment to collect touch data and compare it with two baseline modes: eyes-on and no-keyboard. Results show that peripheral vision can indeed provide a certain degree of visual feedback for controlling thumb tapping. The touch accuracy of peripheral typing is lower than in the eyes-on setting but significantly higher than in the no-keyboard setting. We then combine the derived touch model and a unigram word-level language model to develop an input decoder. Both simulation and real-user studies show that the decoding is effective. When entering in-vocabulary words, peripheral typing (27 WPM) improves input speed by 28% over typical eyes-on typing (21 WPM). It also reduces attention switching and increases user satisfaction.
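The combination of a touch model and a unigram word-level language model described above is a standard form of statistical decoding. A minimal sketch, assuming an isotropic Gaussian touch model and toy key centers, noise level, and word frequencies (all values here are illustrative, not the paper's fitted parameters):

```python
import math

KEY_CENTERS = {            # illustrative (x, y) key centers, arbitrary units
    'o': (8.5, 0.0), 'n': (5.5, 2.0), 'e': (2.5, 0.0),
    'i': (7.5, 0.0), 'c': (2.5, 2.0),
}
SIGMA = 0.8                # touch-noise std dev (larger for peripheral typing)
UNIGRAM = {'one': 0.6, 'ice': 0.4}    # toy unigram word probabilities

def tap_log_likelihood(tap, key):
    """Log-likelihood of a tap under an isotropic Gaussian at the key center."""
    cx, cy = KEY_CENTERS[key]
    dx, dy = tap[0] - cx, tap[1] - cy
    return -(dx * dx + dy * dy) / (2 * SIGMA ** 2) - math.log(2 * math.pi * SIGMA ** 2)

def decode(taps, vocabulary):
    """Rank candidate words by touch-model likelihood times unigram prior."""
    scored = []
    for word in vocabulary:
        if len(word) != len(taps):
            continue                              # length mismatch: skip
        score = math.log(UNIGRAM[word])           # language-model prior
        for tap, key in zip(taps, word):
            score += tap_log_likelihood(tap, key) # touch-model evidence
        scored.append((score, word))
    return [w for _, w in sorted(scored, reverse=True)]

taps = [(8.2, 0.3), (5.1, 1.7), (2.8, 0.2)]   # noisy taps near o-n-e
ranked = decode(taps, UNIGRAM)                # 'one' ranks first here
```

A wider `SIGMA` models the noisier taps observed in peripheral typing; the language-model prior then does more of the work of resolving that noise.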

To support real use that involves typing out-of-vocabulary words (e.g., uncommon names or addresses), we further propose a new design of text entry interface that features two candidate lists: one beneath the output text, associated with peripheral typing, and the other above the keyboard halves, associated with the default split keyboard output. With this design, users can spontaneously choose between peripheral typing and eyes-on typing with no explicit mode switching operation. This design allows peripheral typing to be easily incorporated into the current typing approach (e.g., a commercial keyboard) with little conflict. We also evaluate our prototype, GlanceType, on a real corpus containing out-of-vocabulary words and non-alphabetic characters. Results show that GlanceType increases input speed by 7.0% compared with the commercial keyboard.

Our major contributions are twofold. First, we have studied an unexplored typing mode: peripheral typing. Our investigation showed that it was not only feasible but also promising and even superior to typical eyes-on typing. The reasons are that 1) users can, to a degree, control thumb tapping with peripheral vision; and 2) accounting for the peripheral typing behavior in statistical decoding is effective in resolving the noise generated in this typing mode. Second, we have designed a two-candidate-region mechanism that supports spontaneously choosing between eyes-on and peripheral typing with no need for an explicit mode switch. We have incorporated this mechanism into a text entry system called GlanceType. A user study showed that GlanceType improved performance over a commercial split keyboard.
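The two-candidate-region mechanism can be pictured as a simple dispatch: the region the user taps implies the typing mode, so no explicit mode switch is needed. The function and region names below are hypothetical illustrations, not GlanceType's actual interface:

```python
def commit_candidate(region, peripheral_candidates, default_candidates, index):
    """Commit a word based on which candidate region was tapped.

    region: 'below_text' -> list beneath the output text (peripheral typing)
            'above_keys' -> list above the keyboard halves (eyes-on typing)
    The tapped region implicitly selects the mode.
    """
    if region == 'below_text':
        return peripheral_candidates[index]   # statistically decoded suggestions
    if region == 'above_keys':
        return default_candidates[index]      # default split-keyboard output
    raise ValueError(f"unknown candidate region: {region!r}")

# A user focusing on the output text taps the nearby list:
word = commit_candidate('below_text', ['hello', 'jello'], ['hwllo'], 0)
```

Because both lists are always present, a user entering an out-of-vocabulary word can simply look down and pick the literal output instead.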

2 RELATED WORK
Our work is related to text input, split keyboards, and peripheral vision. To our knowledge, no prior work has investigated the effect of peripheral vision in text input tasks.

Typing on QWERTY Layout with Visual Constraints
Touch typing, in which the user performs ten-finger typing without looking at the physical keyboard, is a well-known solution for efficient text input in the desktop PC scenario. After long-term training, the touch typing speeds of everyday typists can reach 34-79 WPM [8]. The typing performance of blind people [22], and of able-bodied people in blind environments (e.g., virtual reality) [35], has also been researched. As the QWERTY keyboard layout has been implemented on most touchscreen devices (smartphones, smart tabletops, tablets, etc.), many works have tried to bring touch typing from the physical keyboard to the soft keyboard.

Typing with ten fingers on touchscreens has been extensively studied. When typing on a tabletop touchscreen surface [9], touch points without the visual keyboard were relatively noisier than those with the visual keyboard. Splitting and rotating the whole keyboard onto the back of the tablet, with the finger-to-key assignment retained, enables ten-finger typing while gripping the tablet; both a physical version (RearType [26]) and a virtual version (Sandwich Keyboard [25]) of this keyboard have been proposed.

To type on a small keyboard (e.g., on a smartphone), thumb-based typing postures, using one thumb or two thumbs, are generally adopted. Previous works have found it possible for keyboard users to type eyes-free with their thumbs on mini physical keyboards [6] or soft keyboards [18, 33]. Users are also able to type on an invisible keyboard [39] at a speed close to that on a visible keyboard. That is, users possess the ability to seek keys with their thumbs under limited visual conditions when they are well trained on the keyboard layout (so-called spatial memory and muscle memory).
