
March/April 2016. Published by the IEEE Computer Society. 0272-1716/16/$33.00 © 2016 IEEE

Spatial Interfaces
Editors: Frank Steinicke and Wolfgang Stuerzlinger

GyroWand: An Approach to IMU-Based Raycasting for Augmented Reality

Juan David Hincapié-Ramos, University of Manitoba

Kasim Özacar, Tohoku University

Pourang P. Irani, University of Manitoba

Yoshifumi Kitamura, Tohoku University

Optical see-through head-mounted displays (OST-HMDs), such as Epson's Moverio (www.epson.com/moverio) and Microsoft's HoloLens (www.microsoft.com/microsoft-hololens), enable augmented reality (AR) applications that display virtual objects overlaid on the real world. At the core of this new generation of devices are low-cost tracking technologies using techniques such as marker-based tracking,1 simultaneous localization and mapping (SLAM),2 and dead reckoning based on inertial measurement units (IMUs),3,4 which allow us to interpret users' motion in the real world in relation to the virtual content for the purposes of navigation and interaction. These tracking technologies enable AR applications in mobile settings at an affordable price.

The advantages of pervasive tracking come at the cost of limiting interaction possibilities, however. Off-the-shelf HMDs still depend on peripherals such as handheld touch pads. Mid-air gestures and other natural user interfaces (NUIs) offer an alternative to peripherals but are limited to interacting with content relatively close to the user (direct manipulation) and are prone to tracking errors and arm fatigue.5

Raycasting, an interaction technique widely explored in traditional virtual reality (VR), is another alternative for interaction in AR.6 Raycasting generally requires absolute tracking of a handheld controller (known as a wand), but the limited tracking capabilities of novel AR devices make it difficult to track a controller's location.

In this article, we introduce a raycasting technique for AR HMDs. Our approach is to provide raycasting based only on the orientation of a handheld controller. In principle, the rotation of a handheld controller cannot be used directly to determine the direction of the ray because of intrinsic problems with IMUs, such as magnetic interference and sensor drift. Moreover, a user's movement in space creates situations in which the virtual content and ray direction are often not aligned with the HMD's field of view (FOV).

To address these challenges we introduce GyroWand, a raycasting technique for AR HMDs using IMU rotational data from a handheld controller. GyroWand's fundamental design differs from traditional raycasting in four ways:

■ interprets IMU rotational data using a state machine, which includes anchor, active, out-of-sight, and disambiguation states;

■ compensates for drift and interference by taking the orientation of the handheld controller as the initial rotation (zero) when starting an interaction;

■ initiates raycasting from any spatial coordinate (such as a chin or shoulder); and

■ provides three disambiguation methods: Lock and Drag, Lock and Twist, and AutoTwist.


This article is a condensed version of a paper presented at the 2015 ACM Symposium on Spatial User Interaction.7 Here we focus on our design choices and their implications and refer the reader to the full paper for the experimental results.

IMU for Raycasting

To provide raycasting for AR HMDs, we leverage the sensors located on the controller piece normally attached to HMDs. Such a controller contains a nine-axis IMU that captures the controller orientation. Our approach is to use this rotational data to control the pointing direction of the ray. This approach can be extended to HMDs without a handheld controller (such as a Microsoft HoloLens) using external objects to capture rotational data, such as Bluetooth-connected wands, rings, or even smartphones.
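
To make the core idea concrete, here is a minimal sketch, in Python with NumPy, of how a controller's orientation quaternion can steer a ray. The (w, x, y, z) component order and the forward vector are our assumptions for illustration, not details from the paper.

import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by the unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2.0 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2.0 * w * np.cross(u, v)

# The ray's origin stays fixed on the body (see "On-Body Raycast Origin");
# only its direction follows the controller's orientation.
FORWARD = np.array([0.0, 0.0, 1.0])  # assumed ray direction at zero rotation

def ray_direction(controller_quat):
    return quat_rotate(controller_quat, FORWARD)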

The rotational data gathered from IMUs has several known problems, however, including noise, drift and/or magnetic interference, and axis mapping. For raycasting, sensor noise or interference means that the direction of the ray will exhibit jitter or trembling, even if there is none. Sensor drift means that the same controller orientation results in different ray directions as drift accumulates. Finally, the mismatch between the tracker coordinate system and the real world leads to an unnatural mapping between the movements of the controller and the ray, which negates one of the strengths of raycasting. Furthermore, IMUs cannot provide data related to the handheld controller's actual location. A raycasting solution using rotational data as provided by an external IMU should address the following challenges:

■ Compensate for noise-induced jitter or interference.

■ Acknowledge the continuous effect of sensor drift.

■ Recognize the changing relation between the coordinate systems of the handheld controller and the virtual content.

■ Provide an alternative origin for the virtual ray given that the actual controller position cannot be tracked.

■ Propose disambiguation techniques that leverage the degrees of freedom available to the controller.

GyroWand Design

GyroWand is a raycasting technique for self-contained AR HMDs. GyroWand directs a ray that originates at a predefined location in virtual space, using rotational data acquired from an IMU in the user's hand (see Figure 1). This section explains how GyroWand addresses the five challenges we just described.

Dynamic Apex

To address the first challenge, the effect of noise on jitter and selection accuracy, GyroWand uses a dual-pronged approach. First, it filters IMU data with a moving window over the last five data points. This approach reduces jitter while maintaining the isomorphism between the movements of the controller and the ray. Second, we use a cone shape with a 2-degree aperture apex (as in earlier work8). Moreover, users can decrease the aperture apex to 0 degrees by twisting the controller 45 degrees inward (pronation). Similarly, users can increase the aperture apex to 6 degrees by twisting the controller 45 degrees outward (supination) (see Figure 2).
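
Both mechanisms fit in a short sketch. The Python below is ours, not the authors' implementation: the moving window averages the last five ray directions, and the twist-to-apex mapping linearly interpolates between the stated endpoints (0 degrees at -45, 2 at neutral, 6 at +45), which is an assumption because the text specifies only those anchor values.

import numpy as np
from collections import deque

class DynamicApex:
    def __init__(self):
        self.window = deque(maxlen=5)  # moving window over the last five samples

    def smoothed_direction(self, raw_direction):
        # Averaging the recent directions suppresses jitter while keeping
        # the controller-to-ray mapping isomorphic.
        self.window.append(np.asarray(raw_direction))
        mean = np.mean(self.window, axis=0)
        return mean / np.linalg.norm(mean)

    @staticmethod
    def apex_degrees(twist_deg):
        # Pronation (-45 deg) narrows the cone to 0 degrees; a neutral grip
        # gives 2 degrees; supination (+45 deg) widens it to 6 degrees.
        t = float(np.clip(twist_deg, -45.0, 45.0))
        return 2.0 + t * (2.0 / 45.0) if t <= 0.0 else 2.0 + t * (4.0 / 45.0)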

Figure 1. GyroWand enables raycasting in head-mounted displays (HMDs). In this example, the ray origin is the user's chin. The ray direction is controlled using the inertial measurement unit (IMU) on the handheld controller.

Figure 2. Pronation and supination control apex aperture. These help reduce jitter while maintaining the isomorphism between the movements of the controller and the ray. (At a target 1 m away, twisting the hand from -45 degrees through neutral to +45 degrees varies the cone's width from 0 cm through 3.5 cm to 10.5 cm.)

Alternatives to the manual control of the aperture apex also exist. For example, the system could monitor the noise level and automatically adjust the apex to the noise, providing a wider apex when the noise level is high and a narrower one when the noise level is low. Another alternative would be to increase the apex aperture according to the phase of the selection task. In the ballistic phase, when the user is moving from one target to another with large movements, the apex can be wider. In the corrective phase, when the user is refining the selection with small movements, the apex can be narrower.
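
Neither automatic variant is implemented in the paper; as one hypothetical reading, the selection phase could be inferred from the controller's angular speed. The threshold below is entirely our guess.

# Hypothetical phase-adaptive apex (not from the paper): treat fast
# rotation as the ballistic phase and slow rotation as the corrective phase.
BALLISTIC_THRESHOLD_DEG_S = 60.0  # assumed value

def adaptive_apex_degrees(angular_speed_deg_s):
    return 6.0 if angular_speed_deg_s > BALLISTIC_THRESHOLD_DEG_S else 2.0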

A State-Machine for Raycasting

GyroWand addresses the second (drift) and third (coordinate mapping) challenges using the state machine presented in Figure 3. GyroWand is always in one of four states: anchor, active, out-of-sight, and disambiguation. GyroWand transitions between states automatically via timeouts or when the user manually rotates or touches the controller. The initial anchor state is set during the initialization process. In the anchor state, GyroWand points its ray to a predefined coordinate in space relative to the HMD called the anchor point. When the user moves or rotates his or her head in virtual space, the anchor point remains at the same position relative to the HMD, and therefore the ray seems static to the user. Rotational data has no effect on the ray direction. The user interface should clearly show the ray as anchored and therefore disabled for interaction.

Tapping on the controller's touch pad activates the ray, transitioning from the anchor state to the active state. On transitioning, GyroWand captures the controller's orientation and uses it as the initial or baseline rotation, that is, the rotation at which the active ray points at the anchor point. All further rotational movement of the controller affects the ray direction. GyroWand sends a hover event to all the virtual objects it crosses. A tap event (< 150 ms) sends a selection notification to the hovered object closest to the ray.
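
The zeroing step amounts to storing the controller's orientation at activation and measuring all later orientations relative to it. A minimal sketch, assuming unit quaternions in (w, x, y, z) order and the Hamilton product convention:

import numpy as np

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])  # inverse of a unit quaternion

def quat_multiply(a, b):
    # Hamilton product of quaternions a and b.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

class BaselineRotation:
    def on_activate(self, controller_quat):
        # The pose at activation becomes rotation zero, so drift and
        # interference accumulated before this moment are cancelled out.
        self.q0_inv = quat_conjugate(controller_quat)

    def delta(self, controller_quat):
        # Rotation accumulated since activation; applying this delta to the
        # anchor direction yields the current ray direction.
        return quat_multiply(controller_quat, self.q0_inv)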

Users can transition back to the anchor state in two ways: an immobility timeout and a TwistUp gesture. The immobility timeout (for example, 5 seconds) changes the state back to anchor when the total rotation on all axes is smaller than a given threshold. This timeout serves as an implicit interaction to disable the ray when the user places the controller on a surface (such as a table). The user can also issue a TwistUp gesture (akin to pulling a fishing rod).
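
A sketch of the immobility timeout; only the 5-second figure comes from the text, and the per-update rotation threshold is our assumption.

import time

class ImmobilityTimeout:
    def __init__(self, threshold_deg=1.0, timeout_s=5.0):
        self.threshold_deg = threshold_deg  # assumed motion threshold per update
        self.timeout_s = timeout_s          # example value from the text
        self.last_motion = time.monotonic()

    def should_anchor(self, rotation_since_last_update_deg):
        # Returns True once the controller has rested (for example, on a
        # table) for longer than the timeout.
        if rotation_since_last_update_deg > self.threshold_deg:
            self.last_motion = time.monotonic()
        return time.monotonic() - self.last_motion > self.timeout_s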

In the presence of drift, a ray slowly changes direction even as the physical controller remains still, eventually leading to the ray leaving the display area. Typically, a user compensates for drift by rotating the controller in the opposite direction. However, when drift is large, users can reset the initial rotation by moving back into the anchor state (using TwistUp gestures) and back again into the active state (by tapping the touch pad). Moving between the anchor and active states, thus setting the handheld controller's initial or baseline rotation, helps users manually deal with conflicting coordinate systems. If the user is moving around while GyroWand is active, he or she can reset the baseline rotation by going back into the anchor state and back again to active.

Figure 3. State machine for IMU-based raycasting. The four states are anchor, active, out-of-sight, and disambiguation.

The GyroWand's ray can at times be out of the user's view, that is, outside the small FOV of the HMD. For example, the Epson Moverio BT-200 has a diagonal FOV of 23 degrees (approximately 11.2 degrees vertical). Accumulated drift or user movement in the real world easily causes the ray to exit this small window. User interactions with the handheld controller or a change of hand posture to reduce fatigue can also lead to a controller orientation that takes the ray out of sight. Recovering when the ray is out of the user's FOV can be challenging because the user has no indication of how the handheld controller must be rotated to bring the ray back into view. GyroWand enters the out-of-sight state automatically when it judges the ray to be out of the HMD's FOV. In the out-of-sight state, touch input is ignored to avoid triggering selection events on virtual objects out of the user's view. The user can bring GyroWand back into the active state by rotating the controller until the ray is back in view. The user can also issue a TwistUp gesture to anchor the ray and then go back to the active state. Finally, GyroWand transitions into the anchor state automatically after a given time threshold (about 5 seconds). This transition aims at reducing the time users spend "looking" for the ray.
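
The out-of-sight test reduces to comparing the ray's direction, expressed in the HMD's camera frame, against the display's half field of view. A sketch under assumed conventions (+z pointing out of the display); the horizontal FOV default is our estimate from the BT-200's 23-degree diagonal, not a value from the paper.

import numpy as np

def ray_out_of_sight(ray_dir_hmd, h_fov_deg=20.1, v_fov_deg=11.2):
    # ray_dir_hmd is the ray direction transformed into the HMD camera frame.
    x, y, z = ray_dir_hmd
    if z <= 0.0:
        return True  # pointing behind the user
    horizontal = np.degrees(np.arctan2(abs(x), z))
    vertical = np.degrees(np.arctan2(abs(y), z))
    return horizontal > h_fov_deg / 2.0 or vertical > v_fov_deg / 2.0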

The disambiguation state helps users refine the actual target they want to select. GyroWand goes into the disambiguation state only when a tap is started from the active state and the ray crosses several virtual objects. In the disambiguation state, the ray is locked on the position and orientation where the tap started. The default selection target changes from the object closest to the ray to the one closest to the ray origin. Users can use any disambiguation method to iterate over the highlighted objects and specify a new selection target. Disambiguation ends when the user releases the finger from the touch pad, issuing the selection event on the current selection target and returning to the active state.
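
Putting the four states together, the transition logic can be written as a small table. This Python sketch reflects our reading of the description above; the event names are ours, not the paper's.

from enum import Enum, auto

class State(Enum):
    ANCHOR = auto()
    ACTIVE = auto()
    OUT_OF_SIGHT = auto()
    DISAMBIGUATION = auto()

TRANSITIONS = {
    (State.ANCHOR, "tap"): State.ACTIVE,
    (State.ACTIVE, "twist_up"): State.ANCHOR,
    (State.ACTIVE, "immobility_timeout"): State.ANCHOR,
    (State.ACTIVE, "ray_left_fov"): State.OUT_OF_SIGHT,
    (State.ACTIVE, "tap_start_over_several_objects"): State.DISAMBIGUATION,
    (State.OUT_OF_SIGHT, "ray_back_in_fov"): State.ACTIVE,
    (State.OUT_OF_SIGHT, "twist_up"): State.ANCHOR,
    (State.OUT_OF_SIGHT, "timeout"): State.ANCHOR,
    (State.DISAMBIGUATION, "tap_end"): State.ACTIVE,   # issues the selection
    (State.DISAMBIGUATION, "twist_up"): State.ACTIVE,  # unlock without selecting
}

def next_state(state, event):
    return TRANSITIONS.get((state, event), state)  # unknown events: stay put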

On-Body Raycast Origin

Given that GyroWand uses only rotational data, it does not know the spatial location of the controller in the real world. Therefore, we must determine a suitable body location as the ray origin. Our initial idea was to use the eyes' center as the origin. However, the visual representation of such a ray is confusing at best: the ray is shown too close to the users' eyes, resulting in a poor 3D visual effect (often visible only to one eye or hard to fuse stereoscopically), it occupies considerable display real estate, and it causes eye strain. Therefore, we explored different body locations (see Figure 4).

Figure 4. Possible virtual origins for GyroWand's ray, as offsets from the HMD origin (0, 0, 0) in centimeters: chin (0, -12, 0), shoulder (15, -20, 0), chest (0, -30, 10), middle side (15, -40, 20), and external (from a tracker).

The first ray origin location we considered was the middle side, located at (15, -40, 20), that is, 15 cm to the user's dominant side, 40 cm below the virtual camera, and 20 cm in front of the user. This location most closely resembles a user's hand position when he or she is holding the HMD controller while standing. The second ray origin location was the chest, at (0, -30, 10). This location is similar to the way smartphones are held when the user is typing. The third ray origin location was the shoulder, at (15, -20, 0).

The final ray origin location was the chin, at (0, -12, 0). The chin location reduces the effect of the distance to the target on the ray direction. For example, when content is close to the eyes, the controller would have to be held almost vertical when the ray comes from the chest and almost lateral when it comes from the shoulder. These orientations cause a mismatch between the hand movement and the user's capacity to acquire a target. A ray coming from the chin will almost always move forward and within the HMD FOV. Nonetheless, these differences diminish rapidly as the distance to the target increases.
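
In code, each candidate origin is a fixed offset rigidly attached to the head pose. A sketch, with the axis convention (x toward the dominant side, y up, z forward) assumed from Figure 4:

import numpy as np

ORIGINS_CM = {  # offsets from the HMD origin, in centimeters
    "chin":        (0.0, -12.0, 0.0),
    "shoulder":    (15.0, -20.0, 0.0),
    "chest":       (0.0, -30.0, 10.0),
    "middle_side": (15.0, -40.0, 20.0),
}

def ray_origin_world(head_position_m, head_rotation, name="chin"):
    # head_rotation is the HMD pose as a 3x3 rotation matrix; the offset
    # moves with the head, so the origin stays fixed relative to the HMD.
    offset_m = np.asarray(ORIGINS_CM[name]) / 100.0
    return np.asarray(head_position_m) + head_rotation @ offset_m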

Our experimental results showed that an origin near the user's chin is the best choice regardless of the distance to the targets. The chin location was better than the other origins not only in completion time; we also observed a low error rate. Moreover, participants did not report any extra fatigue when the ray came from the chin compared with the other origins.

An interesting observation is that raycasting from the actual controller location yielded some of the worst performance metrics: the slowest completion time and a higher error rate. This means that users actually struggled to select content within arm's reach with a ray coming from their hands, even when they could hold the controller in a more convenient position. This result could have important implications for the use of raycasting in other scenarios such as virtual environments. (A more detailed experimental analysis of this design is available in previous work.7)

IMU-Based Disambiguation

Common approaches to raycasting disambiguation include marking menus and movement along the ray's axis.8,9 Although the use of marking menus such as the Flower Ray8 and similar approaches is an interesting direction, we are interested in approaches that do not require user interface changes. Moreover, moving the controller along the ray's axis is not an option because our design does not use external tracking, and IMU-based estimations of spatial movement are still unreliable for the consumer-level IMUs included in off-the-shelf HMD devices. Therefore, we focus our exploration on techniques that use IMU-based rotational movement and touch input.

Lock and Drag

Figure 5 shows our first technique: Lock and Drag (LD). Using the LD approach, the user directs the ray toward a set of objects including the object to select. These "hovered" objects provide graphical feedback. Putting a finger on the touch pad transitions GyroWand into the disambiguation state, locking the ray and providing feedback about the new state. As long as the finger presses on the touch pad, the ray position and orientation remain unchanged. Initially, the object closest to the ray is the selection candidate. If the user does not remove the finger within a given ∆t (< 150 ms), the object closest to the ray origin becomes the selection candidate. If the user drags the finger in any direction away from the initial point of contact and past a set distance threshold (50 pixels), the next object away from the origin becomes the selection candidate.

Therefore, to select the first object on the ray path, a simple tap plus ∆t is enough. Dragging the finger one distance threshold selects the second object, two thresholds select the third, and so on. Dragging the finger back toward the point of contact moves the selection candidate back to the previous object. Selection is triggered when the user removes the finger from the touch pad (see the magenta circle in Figure 5).

Figure 5. Lock and Drag. Placing a finger down starts disambiguation. Dragging the finger up and down refines the selection. Lifting the finger confirms the selection. Objects not within the selection cone are in gray, objects in the selection cone are in light blue, the current selection candidate is dark blue, and the selected object is magenta. Pink indicates the active ray, and green indicates the disambiguation ray.
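
The mapping from drag distance to selection candidate is a simple quantization. A sketch using the 50-pixel threshold from the text:

def drag_candidate_index(drag_distance_px, n_hovered, threshold_px=50):
    # Index 0 is the object closest to the ray origin (reached after the
    # initial delta-t); each full threshold of drag advances one object
    # farther along the ray, and dragging back reverses the motion.
    index = int(drag_distance_px // threshold_px)
    return max(0, min(index, n_hovered - 1))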

The locking gesture is a source of errors in itself. The finger movement needed to activate locking is often accompanied by a hand rotation and sometimes a full-body movement, a phenomenon known as the Heisenberg effect.10 GyroWand compensates for the Heisenberg effect by resetting the ray's rotation and position to their respective values 50 milliseconds before locking the ray and triggering the selection event. Nonetheless, users might still activate the ray at the wrong position because of their manipulation of the controller or normal operational or perceptual errors. We minimized that possibility by limiting the locking operation to situations in which the ray is actually intercepting an object and the intersection point with that object is inside the user's FOV. Users can unlock the ray (leaving the disambiguation state for the active state) by issuing a TwistUp gesture.
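
One way to implement this correction is a short ring buffer of timestamped ray poses that can be rewound by 50 ms when the lock fires; the buffer length here is our choice.

import time
from collections import deque

class PoseHistory:
    def __init__(self, keep_s=0.2):
        self.keep_s = keep_s  # assumed buffer length; only 50 ms is needed
        self.history = deque()

    def record(self, position, rotation):
        now = time.monotonic()
        self.history.append((now, position, rotation))
        while self.history and now - self.history[0][0] > self.keep_s:
            self.history.popleft()

    def pose_before(self, seconds=0.05):
        # Newest pose at least 50 ms old: the ray pose before the locking
        # tap itself disturbed the controller.
        cutoff = time.monotonic() - seconds
        for t, position, rotation in reversed(self.history):
            if t <= cutoff:
                return position, rotation
        return None  # not enough history yet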

Lock and Twist

Lock and Twist (LT) leverages the extra degree of freedom in wrist movement that is not used by GyroWand (see Figure 6). GyroWand uses flexion to rotate the ray along the x axis and ulnar and radial deviation for rotation along the y axis. LT uses pronation and supination (a twist of the wrist) as the remaining available degree of freedom.

Similar to the previous technique, the user points the ray toward a set of objects before moving into the disambiguation state. The user changes the selection candidate by turning the wrist outward (supination) past a given threshold. Based on earlier work,11 we implemented a linear segmentation of the available rotational space into four spaces (60 degrees available for supination, divided by a 15-degree threshold).

Figure 6. Lock and Twist. Placing a finger down starts disambiguation, supination and pronation refine the selection, and lifting the finger confirms the selection.
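
The twist-to-candidate mapping quantizes the supination angle into the four 15-degree bins; a sketch:

def twist_candidate_index(supination_deg, n_hovered):
    # 60 degrees of comfortable supination, split into 15-degree bins,
    # gives four selectable objects along the ray.
    clamped = max(0.0, min(supination_deg, 59.9))
    return min(int(clamped // 15.0), n_hovered - 1)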


AutoTwist

Figure 7 presents our final disambiguation technique: Autolock and Twist (AT), or simply AutoTwist. The difference between AT and LT is the way in which GyroWand enters the disambiguation state. In AT, the user points the ray toward the set of objects and issues a quick supination movement. After iterative testing, we settled on a rotational speed of 50 degrees/second. Once the trigger twist is detected, the system corrects the ray's orientation and position to those at the start of the movement.

Figure 7. AutoTwist. A quick twist starts disambiguation, supination refines the selection, and a tap confirms the selection.
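
Detecting the trigger twist only requires the angular speed around the twist axis. A sketch using the 50 degrees/second figure from the text; the differencing scheme is ours.

class AutoTwistTrigger:
    TRIGGER_DEG_PER_S = 50.0  # rotational speed settled on after iterative testing

    def __init__(self):
        self.prev_twist_deg = None

    def update(self, twist_deg, dt_s):
        # Returns True when supination speed exceeds the trigger threshold;
        # the caller should then rewind the ray pose to the twist's start
        # (for example, with the PoseHistory sketch shown earlier).
        if self.prev_twist_deg is None:
            self.prev_twist_deg = twist_deg
            return False
        speed = (twist_deg - self.prev_twist_deg) / dt_s
        self.prev_twist_deg = twist_deg
        return speed > self.TRIGGER_DEG_PER_S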

Experimental Analysis

Our experimental results show LT to be generally the fastest of the three techniques, without any negative effects on error rate or physical exertion. This advantage held even though AutoTwist showed a faster disambiguation time. We believe LT has an overall advantage because of the difficulty participants perceived in locking the ray with the AutoTwist technique.

Based on these results, we recommend using LT when the number of targets to disambiguate is small. Navigating through a large list of objects by twisting the wrist does not scale well and leads to uncomfortable hand positions. (A more detailed experimental analysis of this design is available in previous work.7)

In the future, one possible way to deal with scalability is a disambiguation mechanism that combines the LT and LD approaches. Yet allowing the user to switch to the more time-consuming finger-dragging method in combination with a twist movement is likely inconvenient.

Our work has revealed three main implications of the GyroWand design. First, IMU sensing alone is a suitably good approach for target selection on AR HMD platforms. Second, raycasting using IMU data can originate at a location away from the controller; an origin close to the user's chin is a good candidate. This is particularly important for close targets, as we tested in our study. Our results support earlier research12 that found origins close to the eye to provide "good performance for both targeting and tracing tasks."

Lastly, small hand rotations, particularly wrist pronation and supination, are good design options for disambiguating targets laid out in 3D. In combination, a mechanism to switch into a disambiguation mode, such as the LT approach, provides a suitable balance that mitigates unwanted movement and allows precise selection.

References

1. Y. Nakazato, M. Kanbara, and N. Yokoya, "Wearable Augmented Reality System Using Invisible Visual Markers and an IR Camera," Proc. 9th IEEE Int'l Symp. Wearable Computers (ISWC), 2005, pp. 198–199.

2. J. Ventura et al., “Global Localization from Monocular SLAM on a Mobile Phone,” IEEE Trans. Visualization and Computer Graphics, vol. 20, no. 4, 2014, pp. 531–539.

3. A. Mulloni, H. Seichter, and D. Schmalstieg, "Handheld Augmented Reality Indoor Navigation with Activity-Based Instructions," Proc. 13th Int'l Conf. Human Computer Interaction with Mobile Devices and Services (MobileHCI), 2011, pp. 211–220.

4. P. Zhou, M. Li, and G. Shen, "Use It Free: Instantly Knowing Your Phone Attitude," Proc. 20th Ann. Int'l Conf. Mobile Computing and Networking (MobiCom), 2014, pp. 605–616.

5. J.D. Hincapié-Ramos et al., “Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2014, pp. 1063–1072.

6. F. Argelaguet and C. Andujar, “A Survey of 3D Object Selection Techniques for Virtual Environments,” Computers & Graphics, vol. 37, no. 3, 2013, pp. 121–136.

7. J.D. Hincapié-Ramos et al., “GyroWand: IMU-based Raycasting for Augmented Reality Head-Mounted Displays,” Proc. 3rd ACM Symp. Spatial User Interaction (SUI), 2015, pp. 89–98.

8. T. Grossman and R. Balakrishnan, “The Design and Evaluation of Selection Techniques for 3D Volumetric Displays,” Proc. 19th Ann. ACM Symp. User Interface Software and Technology (UIST), 2006, pp. 3–12.

9. K. Hinckley et al., “A Survey of Design Issues in Spatial Input,” Proc. 7th Ann. ACM Symp. User Interface Software and Technology (UIST), 1994, pp. 213–222.

10. D.A. Bowman et al., “Using Pinch Gloves™ for Both Natural and Abstract Interaction Techniques in Virtual Environments,” Proc. HCI Int’l, 2001, pp. 629–633.

11. M. Rahman et al., “Tilt Techniques: Investigating the Dexterity of Wrist-Based Input,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2009, pp. 1943–1952.

12. R. Jota et al., "A Comparison of Ray Pointing Techniques for Very Large Displays," Proc. Graphics Interface (GI), 2010, pp. 269–276.

Juan David Hincapié-Ramos completed this work while he was a postdoctoral researcher at the University of Manitoba and a visiting researcher at Tohoku University. He is now a human-computer interaction researcher at Lenovo. Contact him at [email protected].

Kasim Özacar is a PhD candidate at Tohoku University’s Research Institute of Electrical Communication. Contact him at [email protected].

Pourang P. Irani is a professor of human-computer interaction at the University of Manitoba. Contact him at irani@cs.umanitoba.ca.

Yoshifumi Kitamura is a professor in the Research Institute of Electrical Communication at Tohoku University and head of the Interactive Content Design Lab. Contact him at [email protected].

Contact department editors Frank Steinicke at [email protected] and Wolfgang Stuerzlinger at w.s@sfu.ca.
