3D User Interfaces: Theory and Practice


3D User Interfaces


3D User Interfaces: Theory and Practice

Doug A. Bowman

Ernst Kruijff

Joseph J. LaViola, Jr.

Ivan Poupyrev

Boston • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Capetown • Sydney • Tokyo • Singapore • Mexico City


Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers discounts on this book when ordered in quantity for bulk purchases and special sales. For more information, please contact:

U.S. Corporate and Government Sales
(800) [email protected]

For sales outside of the U.S., please contact:

International Sales
(317) [email protected]

Visit Addison-Wesley on the Web: www.awprofessional.com

Library of Congress Cataloging-in-Publication Data

3D user interfaces : theory and practice / Doug A. Bowman . . . [et al.].
p. cm.

Includes bibliographical references and index.
ISBN 0-201-75867-9 (hardbound : alk. paper)

1. User interfaces (Computer systems) 2. Three-dimensional display systems. I. Bowman, Doug A., 1971–

QA76.9.U83A14 2004
005.4'37—dc22

2004008403

Copyright © 2005 by Pearson Education, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior consent of the publisher. Printed in the United States of America. Published simultaneously in Canada.

For information on obtaining permission for use of material from this work, please submit a written request to:

Pearson Education, Inc.
Rights and Contracts Department
75 Arlington Street, Suite 300
Boston, MA 02116
Fax: (617) 848-7047

3rd Printing March 2011

Text printed in the United States on recycled paper at Courier Westford in Westford, Massachusetts.
ISBN 0-201-75867-9


For Dawn, Drew, and Caroline, my true joys on this earth.—Doug

To my family, for their faith, affection, and support.—Ernst

To my family, with love—they are my life.—Joe

To my parents, for bringing me up.—Ivan



Contents

Foreword xv
Preface xix

PART I FOUNDATIONS OF 3D USER INTERFACES 1

Chapter 1 Introduction to 3D User Interfaces 3
1.1. What Are 3D User Interfaces? 3
1.2. Why 3D User Interfaces? 4
1.3. Terminology 6
1.4. Application Areas 8
1.5. Conclusion 9

Chapter 2 3D User Interfaces: History and Roadmap 11
2.1. History of 3D UIs 11
2.2. Roadmap to 3D UIs 14
2.2.1. Areas Informing the Design of 3D UIs 15
2.2.2. 3D UI Subareas 18
2.2.3. Areas Impacted by 3D UIs 22
2.3. Scope of This Book 25
2.4. Conclusion 26

PART II HARDWARE TECHNOLOGIES FOR 3D USER INTERFACES 27

Chapter 3 3D User Interface Output Hardware 29
3.1. Introduction 29
3.1.1. Chapter Roadmap 30
3.2. Visual Displays 31
3.2.1. Visual Display Characteristics 31
3.2.2. Depth Cues 34
3.2.3. Visual Display Device Types 40

3.3. Auditory Displays 59
3.3.1. 3D Sound Localization Cues 59
3.3.2. 3D Sound Generation 62
3.3.3. Sound System Configurations 64
3.3.4. Audio in 3D Interfaces 66
3.4. Haptic Displays 68
3.4.1. Haptic Cues 68
3.4.2. Haptic Display Characteristics 70
3.4.3. Haptic Display Types 71
3.4.4. Haptic Displays in 3D Interfaces 77
3.5. Design Guidelines: Choosing Output Devices for 3D User Interfaces 77
3.6. Conclusion 83

Chapter 4 3D User Interface Input Hardware 87
4.1. Introduction 87
4.1.1. Input Device Characteristics 88
4.1.2. Chapter Roadmap 89
4.2. Desktop Input Devices 90
4.2.1. Keyboards 91
4.2.2. 2D Mice and Trackballs 91
4.2.3. Pen-Based Tablets 92
4.2.4. Joysticks 93
4.2.5. Six-DOF Input Devices for the Desktop 95
4.3. Tracking Devices 96
4.3.1. Motion Tracking 96
4.3.2. Eye Tracking 105
4.3.3. Data Gloves 106
4.4. 3D Mice 110
4.4.1. Handheld 3D Mice 111
4.4.2. User-Worn 3D Mice 113
4.5. Special-Purpose Input Devices 114
4.6. Direct Human Input 118
4.6.1. Speech Input 119
4.6.2. Bioelectric Input 120
4.6.3. Brain Input 120
4.7. Home-Brewed Input Devices 122
4.7.1. Strategies for Building Input Devices 122
4.7.2. Connecting the Home-Brewed Input Device to the Computer 124

4.8. Choosing Input Devices for 3D Interfaces 126
4.8.1. Important Considerations 126
4.8.2. Input Device Taxonomies 128
4.8.3. Empirical Evaluations 132

PART III 3D INTERACTION TECHNIQUES 135

Chapter 5 Selection and Manipulation 139
5.1. Introduction 139
5.1.1. Chapter Roadmap 140
5.2. 3D Manipulation Tasks 140
5.2.1. Canonical Manipulation Tasks 141
5.2.2. Application-Specific Manipulation Tasks 143
5.3. Manipulation Techniques and Input Devices 143
5.3.1. Control Dimensions and Integrated Control in 3D Manipulation 144
5.3.2. Force versus Position Control 145
5.3.3. Device Placement and Form-Factor in 3D Manipulation 145
5.4. Interaction Techniques for 3D Manipulation 147
5.4.1. Classifications of Manipulation Techniques 147
5.4.2. Interacting by Pointing 150
5.4.3. Direct Manipulation: Virtual Hand Techniques 158
5.4.4. World-in-Miniature 162
5.4.5. Combining Techniques 163
5.4.6. Nonisomorphic 3D Rotation 168
5.4.7. Desktop 3D Manipulation 171
5.5. Design Guidelines 179

Chapter 6 Travel 183
6.1. Introduction 183
6.1.1. Chapter Roadmap 184
6.2. 3D Travel Tasks 184
6.2.1. Exploration 185
6.2.2. Search 185
6.2.3. Maneuvering 186
6.2.4. Additional Travel Task Characteristics 187
6.3. Travel Techniques 188
6.3.1. Technique Classifications 188
6.3.2. Physical Locomotion Techniques 192
6.3.3. Steering Techniques 199

6.3.4. Route-Planning Techniques 206
6.3.5. Target-Based Techniques 210
6.3.6. Manual Manipulation Techniques 214
6.3.7. Travel-by-Scaling Techniques 216
6.3.8. Viewpoint Orientation Techniques 217
6.3.9. Velocity Specification Techniques 219
6.3.10. Integrated Camera Controls for Desktop 3D Environments 220
6.4. Design Guidelines 222

Chapter 7 Wayfinding 227
7.1. Introduction 227
7.1.1. Chapter Roadmap 229
7.2. Theoretical Foundations 229
7.2.1. Wayfinding Tasks 231
7.2.2. Types of Spatial Knowledge 231
7.2.3. Egocentric and Exocentric Reference Frames 232
7.3. User-Centered Wayfinding Support 234
7.3.1. Field of View 235
7.3.2. Motion Cues 235
7.3.3. Multisensory Output 236
7.3.4. Presence 237
7.3.5. Search Strategies 237
7.4. Environment-Centered Wayfinding Support 239
7.4.1. Environment Design 239
7.4.2. Artificial Cues 242
7.5. Evaluating Wayfinding Aids 250
7.6. Design Guidelines 251
7.7. Conclusion 253

Chapter 8 System Control 255
8.1. Introduction 255
8.1.1. Human Factors of System Control 257
8.1.2. Input Devices 257
8.1.3. System- and Application-Level Factors 258
8.1.4. Chapter Roadmap 258
8.2. Classification 259
8.3. Graphical Menus 260
8.3.1. Techniques 260
8.3.2. Design and Implementation Issues 265
8.3.3. Practical Application 267

8.4. Voice Commands 268
8.4.1. Techniques 268
8.4.2. Design and Implementation Issues 268
8.4.3. Practical Application 269
8.5. Gestural Commands 270
8.5.1. Techniques 271
8.5.2. Design and Implementation Issues 272
8.5.3. Practical Application 273
8.6. Tools 273
8.6.1. Techniques 274
8.6.2. Design and Implementation Issues 276
8.6.3. Practical Application 277
8.7. Multimodal System Control Techniques 278
8.8. Design Guidelines 280
8.9. Case Study: Mixing System Control Methods 282
8.9.1. The ProViT Application 282
8.9.2. System Control Design Approach for ProViT 283
8.9.3. Mapping of Tasks to Devices 283
8.9.4. Placement of System Control 284
8.9.5. System Control Feedback 284
8.10. Conclusion 285

Chapter 9 Symbolic Input 287
9.1. Introduction 287
9.1.1. Why Is Symbolic Input Important? 288
9.1.2. Scenarios of Use 288
9.1.3. Brief History of Symbolic Input 290
9.1.4. Distinctive Features of Symbolic Input in 3D UIs 291
9.1.5. Chapter Roadmap 292
9.2. Symbolic Input Tasks 293
9.2.1. Alphanumeric Input 293
9.2.2. Editing Alphanumeric Symbols 293
9.2.3. Markup Input 294
9.3. Symbolic Input Techniques 294
9.3.1. Keyboard-Based Techniques 294
9.3.2. Pen-Based Techniques 300
9.3.3. Gesture-Based Techniques 303
9.3.4. Speech-Based Techniques 304
9.4. Design Guidelines 306
9.5. Beyond Text and Number Entry 310

PART IV DESIGNING AND DEVELOPING 3D USER INTERFACES 311

Chapter 10 Strategies for Designing and Developing 3D User Interfaces 313
10.1. Introduction 313
10.1.1. Designing for Humans 314
10.1.2. Inventing 3D User Interfaces 314
10.1.3. Chapter Roadmap 315
10.2. Designing for Humans 315
10.2.1. Feedback in 3D User Interfaces 315
10.2.2. Constraints 322
10.2.3. Two-Handed Control 323
10.2.4. Designing for Different User Groups 327
10.2.5. Designing for User Comfort 328
10.3. Inventing 3D User Interfaces 330
10.3.1. Borrowing from the Real World 331
10.3.2. Adapting from 2D User Interfaces 335
10.3.3. Magic and Aesthetics 340
10.4. Design Guidelines 345

Chapter 11 Evaluation of 3D User Interfaces 349
11.1. Introduction 349
11.1.1. Purposes of Evaluation 350
11.1.2. Terminology 351
11.1.3. Chapter Roadmap 351
11.2. Background 351
11.2.1. Tools for Evaluation Design and Implementation 352
11.2.2. Evaluation Methods Used for 3D Interfaces 354
11.3. Evaluation Metrics for 3D Interfaces 357
11.3.1. System Performance Metrics 357
11.3.2. Task Performance Metrics 358
11.3.3. User Preference Metrics 358
11.4. Distinctive Characteristics of 3D Interface Evaluation 360
11.4.1. Physical Environment Issues 360
11.4.2. Evaluator Issues 362
11.4.3. User Issues 363
11.4.4. Evaluation Type Issues 365
11.4.5. Miscellaneous Issues 367
11.5. Classification of 3D Evaluation Methods 367
11.6. Two Multimethod Approaches 369
11.6.1. Testbed Evaluation Approach 370

11.6.2. Sequential Evaluation Approach 375
11.6.3. Comparison of Approaches 378
11.7. Guidelines for 3D Interface Evaluation 382
11.7.1. General Guidelines 382
11.7.2. Guidelines for Formal Experimentation 383

PART V THE FUTURE OF 3D USER INTERFACES 385

Chapter 12 Beyond Virtual: 3D User Interfaces for the Real World 387
12.1. Introduction 387
12.1.1. What Is Augmented Reality? 389
12.1.2. Bringing Virtual Interfaces into the Real World 390
12.1.3. Chapter Roadmap 391
12.2. AR Interfaces as 3D Data Browsers 391
12.3. 3D Augmented Reality Interfaces 394
12.4. Augmented Surfaces and Tangible Interfaces 395
12.5. Tangible AR Interfaces 397
12.5.1. Design of Tangible AR 398
12.5.2. Time-Multiplexed Interaction in Tangible AR 400
12.5.3. Advantages and Disadvantages of Tangible AR 402
12.6. Agents in AR 403
12.7. Transitional AR-VR Interfaces 404
12.8. Conclusion 405

Chapter 13 The Future of 3D User Interfaces 407
13.1. Questions about 3D UI Technology 407
13.2. Questions about 3D Interaction Techniques 410
13.3. Questions about 3D UI Design and Development 412
13.4. Questions about 3D UI Evaluation 415
13.5. Million-Dollar Questions 416

Appendix A Quick Reference Guide to 3D User Interface Mathematics 419
A.1. Scalars 420
A.2. Vectors 420
A.3. Points 421
A.4. Matrices 422
A.5. Quaternions 424

Bibliography 429

Index 457


Foreword

Three-dimensional user interfaces are finally receiving their due! Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like Ivan Sutherland, Bob Sproull, Fred Brooks, Andrew Ortony, and Richard Feldman. Although many commercially successful 3D applications exist—computer-aided design and simulation, radiation therapy, drug discovery, surgical simulation, scientific and information visualization, entertainment—no author or group of authors has written a comprehensive and authoritative text on the subject, despite a continuing and rich set of research findings, prototype systems, and products.

Why is that? Why is it that this book by Doug Bowman, Ernst Kruijff, Joe LaViola, and Ivan Poupyrev is the first thorough treatment of 3D UIs?

Perhaps it was our digression during the last 20 years to the WIMP GUI. After all, the Windows, Icons, Menus, and Pointers GUI is used very widely by millions of users. Mac OS and Microsoft Windows users know it well, as do many UNIX users. Indeed, every user of the Web works with a GUI, and this year there are many hundreds of millions of them. Two-dimensional GUIs will be with us for a long time. After all, a lot of the workaday world with which we deal is flat—not just our Web pages but our documents, presentations, and spreadsheets too. Yes, some of these can be extended to 3D, but most of the time, 2D is just fine, thank you very much. Furthermore, pointing and selecting and typing are relatively fast and relatively error-free—they work, and they work well.

Page 17: 3D User Interfaces: Theory and Practice -

Perhaps it is that not as many people use 3D GUIs as use the 2D WIMP GUI, and so they are not thought to be as important. But the above list of 3D applications involves multibillion-dollar manufacturing industries, such as aerospace and automotive, and equally large and even more important activities in the life-saving and life-giving pharmaceutical and health care industries.

Perhaps it was that we needed the particular set of backgrounds that Doug, Joe, Ivan, and Ernst bring to the table. Doug comes out of the GVU Center at Georgia Tech, where he worked on 3D UIs with Larry Hodges and others and learned the value of careful user studies and experimentation, and he is now a member of an influential HCI group at Virginia Tech; Joe works at Brown with Andy van Dam, a long-time proponent of rich 3D interaction; Ivan comes from the HIT Lab at the University of Washington, where he worked with Tom Furness and Suzanne Weghorst, and now works with Jun Rekimoto at Sony CSL; and Ernst works with Martin Goebel in the VE Group at Fraunhofer IMK in Germany.

Whatever the case, I am excited and pleased that this team has given us the benefit of their research and experience. As I reviewed the draft manuscript for this book, I jotted down some of the thoughts that came to my mind: comprehensive, encyclopedic, authoritative, taxonomic; grounded in the psychological, HCI, human factors, and computer graphics literature; grounded in the personal research experiences of the authors, their teachers, and their students.

I myself have long preached the importance of integrating the study of the computer with the study of the human. Indeed, this is the key premise on which I built the GVU Center at Georgia Tech. This book certainly follows that admonition. There are numerous discussions of human issues as they relate to 3D navigation and interaction, drawing on references in psychology and human factors.

This is indeed a book for both practitioners and researchers. The extensive literature reviews, examples, and guidelines help us understand what to do now. Combined with the research agenda in Chapter 13, The Future of 3D User Interfaces, the material also helps us have a sense of what it is that we do not yet know.

I particularly commend to readers the Chapter 11 discussion of evaluating 3D UIs. We in the computer graphics community have tended to design devices and techniques and then “throw them over the wall” to the user community. This is not the route to success. Careful study of user needs coupled with evaluation as part of the ongoing design cycle is much more likely to lead to effective techniques. The authors, all of whom have grappled with the difficult task of designing 3D interfaces, know from first-hand experience how crucial this is. Their section 11.4, on the distinctive characteristics of the 3D interface evaluation process, is a wonderful codification of that first-hand knowledge.

Thanks to Doug and Ernst and Joe and Ivan!

Jim Foley
GVU Center
College of Computing
Georgia Tech
March 2004


Preface

An architect sits in her home office, putting the final touches on the design of the new entrance to the city park. A three-dimensional virtual model of the park appears in front of her on the desk’s surface. She nudges a pathway slightly to the right to avoid a low-lying area, and then makes the model life-size so she can walk along the path to view the effect. “Those dark colors on the sign at the entrance are too foreboding,” she thinks, so she quickly changes the color palette to brighter primary colors. She looks up and notices that the clients are arriving for the final design review meeting. They are located in other offices around the city, but they can all view the 3D model and make suggested changes, as well as communicate with one another. “What’s the construction plan?” asks one of the clients. The architect starts an animation showing the progress of the project from start to finish. “That first step may not work,” says the client. “The excavation is much too close to the existing playground. Let me show you.” He looks out his window, which has a view of the park, and overlays the virtual construction plan on it. “You’re right,” says the architect, “let’s plan to move the playground slightly—that will be much cheaper than changing the construction site.” After viewing the effects of the change, all agree that this plan will work, and the meeting adjourns.

This scenario and others like it illustrate the enormous potential of 3D environments and applications. The technology to realize such a vision is available now, although it will certainly be improved. But the scenario also leaves out a great deal of information—information that is crucial to making this dream a reality. How did the architect load the park model, and how does she manipulate her view of it? What technique is used to change the pathway? How can multiple clients all manipulate the model at the same time? How do the participants appear to each other in the virtual space? How is the speed and playback of the animation controlled? How did the client instruct the system to merge the real and virtual scenes?

These questions all relate to the design of the user interface (UI) and interaction techniques for this 3D application, an area that is usually given only a cursory treatment in futuristic films and books. The scenarios usually either assume that all interaction between the user and the system will be “natural”—based on techniques like intuitive gestures and speech—or “automatic”—the system will be so intelligent that it will deduce the user’s intentions. But is this type of interaction realistic, or even desirable?

This book addresses the critical area of 3D UI design—a field that seeks to answer detailed questions, like those above, that make the difference between a 3D system that is usable and efficient and one that causes user frustration, errors, and even physical discomfort. We present practical information for developers, the latest research results, easy-to-follow guidelines for the UI designer, and relevant application examples. Although there are quite a few books devoted to UIs in general and to 2D UI design in particular, 3D UIs have received significantly less attention. The results of work in the field are scattered throughout numerous conference proceedings, journal articles, single book chapters, and Web sites. This field deserves a reference and educational text that integrates the best practices and state-of-the-art research, and that’s why this book was created.

How This Book Came to Be

The story of this book begins in April 1998, when Ivan Poupyrev and Doug Bowman were doctoral students at Hiroshima University and Georgia Tech respectively, working on 3D interaction techniques for object manipulation in virtual environments (VEs). We started a lively email discussion about the design and usability of these techniques and about 3D UIs in general. Ivan, who was at the time a visiting research student at the University of Washington, suggested that the discussion would be even more profitable if other researchers in this new area could join in as well, and so the 3DUI mailing list was born. Since that time, over 100 researchers from around the globe have joined the list and participated in the discussion (to see an archive of all the list traffic or to join the list, check out http://www.3dui.org). Joe LaViola and Ernst Kruijff were two of the first people to join the list.

In August of that same year, Doug forwarded to the list a call for tutorials for the upcoming IEEE Virtual Reality Conference. After some discussion, Joe, Ivan, and Ernst agreed to join Doug to organize a tutorial on “The Art and Science of 3D Interaction.” The tutorial was a big hit at the conference in Houston, and the four of us continued to present courses on the topic at ACM Virtual Reality Software and Technology 1999, IEEE VR 2000, and ACM SIGGRAPH 2000 and 2001.

After developing a huge amount of content for the notes supplements of these courses, we decided it would be silly not to compile and expand all of this information in book form. Furthermore, there was no way to include all the information available on 3D UIs in a one-day course. And that’s why you’re holding this book in your hands today—a book containing information on 3D UIs that can’t be found in any other single source.

What’s in the Book

The title of this book emphasizes that we have written it for both academics/researchers and practitioners/developers—both those interested in basic research and those interested in applications. Most chapters of the book integrate both theory and practical information. We intend the book to be used both as a textbook (see suggestions below) and as a reference work.

Theory-related content includes the following:

• Sections on the psychology and human factors of various 3D interaction tasks

• Information on different approaches for the evaluation of 3D UIs (Chapter 11)

• Results from empirical studies of 3D interaction techniques

• A research agenda for 3D interaction (Chapter 13)

• Lists of recommended further reading at the end of most chapters

• A comprehensive bibliography of important research articles


Practice-related content includes the following:

• Principles for choosing appropriate input and output devices for 3D systems (Chapters 3 and 4)

• Details and helpful tips for the implementation of common 3D interaction techniques

• Guidelines for the selection of interaction techniques for common 3D tasks

• Case studies of 3D UIs in real-world applications

The book is organized into five parts. Part I introduces the topic of 3D UIs. Part II discusses the input and output device technology used in the development of 3D UIs, with an emphasis on the impact of these devices on usability and performance. Part III presents a wide range of 3D interaction techniques for the common tasks of navigation, selection and manipulation, system control, and symbolic input. In Part IV, we discuss the design, development, and evaluation of complete 3D UI metaphors and applications. Finally, Part V considers the future, with chapters on 3D interaction in augmented reality applications and a research agenda for 3D UIs. The appendix includes information on required mathematical background and is followed by a bibliography of 3D UI references.

Throughout the book, we offer several special features. First, most chapters contain numerous guidelines—practical and proven advice for the designer and developer. Guidelines are indicated in the text like this:

Follow the guidelines in this book to help you design usable 3D UIs.

We also include implementation details for many of the most common and useful interaction techniques. We describe these algorithms using a combination of textual and mathematical descriptions (to avoid a bias toward any particular development tool or programming style).

How to Use the Book and Related Material

If you are a 3D UI developer: Professional developers can use the book for inspiration and guidance in the design, implementation, and evaluation of applications with 3D UIs. In the design process, developers can consider overall UI metaphors from Part IV, choose specific interaction techniques from Part III, and match these with appropriate input and display devices from Part II. The design guidelines from all of these sections should help developers make rational, informed decisions. The implementation of the 3D UI can benefit from the textual and mathematical descriptions of interaction techniques we provide in Part III. Finally, developers can choose evaluation methods and assess the usability of their applications based on the information in Chapter 11.

If you are a teacher: The book can also be used as a textbook in several different types of university-level courses. A graduate course on 3D UI design could use it as a primary textbook. A more generic virtual environments course could use Parts I, II, and III of this book as an introduction to the basic technology and techniques used in VE interaction. An undergraduate HCI course could pull information from Parts I and IV in a module on 3D interfaces and their differences from traditional UIs. Implementation of common techniques from Part III could enhance a course on interactive 3D graphics.

If you are a researcher: This book can serve as a comprehensive reference guide for researchers engaged in 3D UI design or evaluation, the investigation of 3D applications, or the use of VEs or augmented reality. The research agenda in Chapter 13 also provides researchers and research students with a list of important questions to be addressed in the field. It could even be used as the starting point for a PhD student looking for a topic related to 3D UIs.

3D UI design is a fast-moving and evolving field. Therefore, we are committed to updating the material in this book. One way we will do this is through the book’s official Web site at http://www.3dui.org. This site will contain information and links related to the latest 3D UI research and applications, organized in the same manner as the book so you can easily find new information about the topics in a particular part or chapter. The site will also allow you to join the 3DUI mailing list. We also ask for your help in keeping the book up to date. Send us your comments, clarification questions, or links to additional information by visiting the Web site above and using the online feedback form. Or email us directly at [email protected]. Your comments will help us update the Web site, as well as future editions of this book.


Acknowledgments

This book would not have been possible without the hard work, support, and intelligence of a large group of people.

First, we offer our gratitude to the reviewers who gave their time and energy in improving the quality of the book. Their comments and suggestions have made the book more complete, more readable, and more useful. Thanks to Ben Shneiderman, Harry Hersh, D. Jay Newman, Jeff Pierce, Dieter Schmalstieg, and Bob Zeleznik for providing this invaluable service. Special thanks go to Jim Foley for his encouragement and support.

Next, we would like to thank our editor at Addison-Wesley, Peter Gordon, for his invaluable advice and encouragement. The rest of the staff, including Bernie Gaffney, Amy Fleischer, Julie Nahil, Heather Mullane, and Curt Johnson, have also been extremely helpful. Thanks also to Simone Payment and Carol Lallier for their competent and professional work during the production phase.

All of us would like to personally thank our colleagues in the 3D UI community for their fruitful discussions and collaborations. They include Mark Mine, Robert Lindeman, Matthew Conway, Ken Hinckley, Shumin Zhai, Kiyoshi Kiyokawa, Chris Shaw, Mark Billinghurst, Rudy Darken, Pablo Figueroa, and Bernd Fröhlich.

Portions of this material are based upon work supported by the National Science Foundation under Grants No. DUE-0127326 and IIS-0237412. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

Doug Bowman: I would like to thank my wife, Dawn, for her unfailing love and support, as well as my extended family and friends, especially those at Grace Covenant Presbyterian Church. Much gratitude is due to Joe, Ivan, and Ernst for seeing this project through and for all their years of friendship and collaboration. Thanks also go to my colleagues and students at Virginia Tech, including Chris North, Ron Kriz, Mehdi Setareh, Walid Thabet, Thomas Ollendick, David Cox, Debby Hix, John Kelso, Joe Gabbard, Chad Wingrave, Jian Chen, Nicholas Polys, Wendy Schafer, and Marcio Pinho. Past colleagues at Georgia Tech also deserve thanks. They include Larry Hodges, Drew Kessler, David Koller, Donald Johnson, Donald Allison, Brian Wills, Jean Wineman, Jay Bolter, Elizabeth Davis, Albert Badre, and Ben Watson.

Ernst Kruijff: First of all, my thanks go to Doug, Ivan, and Joe for their great cooperation, help, and extensive discussions. Thanks also go to my parents, my brother, and my sister-in-law for their support, each in their own way. Furthermore, my thanks go to all of my colleagues at the VE group of IMK, especially Martin Goebel for being a great chief, Gerold Wesche, Andreas Simon, Gernot Goebbels, Stefan Conrad, Aeldrik Pander, and Steffi Beckhaus for their past and current cooperation and help during the making of this book and its instigators, the courses we (the authors) gave together. My thanks also go to my past colleagues at Bauhaus University, especially the members of igroup, in particular Holger Regenbrecht. Furthermore, thanks to all the students who helped in many projects, especially Stefan Hansen, Arnold Mueller, Jakob Beetz, and Hartmut Seichter. Finally, my thanks go to Dieter Schmalstieg for being patient with my PhD efforts.

Joe LaViola: I would like to thank my PhD thesis advisor, Andries van Dam, for his forgiveness and patience and for putting up with someone as hardheaded as me. Thanks also to my mom and dad, Jamie, Nick, Heidi, and my friends Don Carney and Dave Gondek for their love and support throughout the writing of this book. My colleagues Robert Zeleznik, Daniel Keefe, Daniel Acevedo Feliz, Andrew Forsberg, Loring Holden, Tim Miller, Steve Dollins, Lee Markosian, David Karelitz, Tim Rowley, Christine Waggoner, and all the members of the Brown University Computer Graphics Group past and present also deserve thanks. Finally, thanks to my coauthors for their hard work and dedication.

Ivan Poupyrev: I would never have been able to work on this book without the help and support I received from many people I was fortunate to meet. I am deeply indebted to Professor Tadao Ichikawa, who supervised my PhD thesis at Hiroshima University. I am also thankful to Professor Masahito Hirakawa for his help and support, as well as to my fellow students and researchers Bryn Holmes, Olivier Liechti, and Numada Tomokazu. I am grateful to the Japanese government for providing me with the Monbusho Scholarship for conducting graduate studies in Japan.

While working on my PhD thesis, I had the exceptional opportunity to spend almost three years at the Human Interface Technology Laboratory (HITL) at the University of Washington. The HITL experience made an enormous impact on me and formed me as a researcher. For this, I will always be grateful to Suzanne Weghorst, the Director of Research at the HITL, who invited me in summer 1995 to join HITL first as a summer intern and then as a visiting researcher; and Professor Tom Furness III, Director of HITL, who believed in me and provided invaluable moral and financial support. I was very fortunate to meet and work with Mark Billinghurst; our collaboration and friendship extended well beyond my stay at the HITL. I am also deeply thankful to Edward Miller for his help in developing a number of HITL 3D UI projects, in particular the V3D interface toolkit, as well as to Jerry Prothero and Hunter Hoffman for late-night discussions of chess, psychology, and the meaning of life; Tony Emerson for finding everything that I ever needed; Ann Elias for all her generous help while I was at the HITL; Mark Phillips, Jennifer Feyma, and Chris Airola for being friends and giving me life outside the lab.

The augmented reality chapter of this book includes a significant amount of work that I did while at the ATR Media Integration and Communication Research Labs in Kyoto. I am very thankful to all the amazing people I met and worked with there: Ryohei Nakatsu, Jun Ohya, Nobuji Tetsutani, Jun Kurumisawa, Tatsumi Sakaguchi, Keiko Nakao, Lew Baldwin, Desney Tan, Sidney Fels, Michael Lyons, Tal Shalif, Parham Zolfaghari, Christa Sommerer, and many others.

Finally, I am thankful to my family and friends in Russia, particularly my parents and brother, Pavel Poupyrev; my first research advisers, Vladimir Lischouk, Dinara Gazizova, and Vladimir Lukin; as well as to Sergei Tsimbalist, Vadim Malishev, Vitaly Lazorin, Constantin Guzovski, and Elena Krupskaya.

It is impossible to mention everyone in Japan, the United States, and Russia who helped, supported, and inspired me. Without this help, support, and inspiration I would not be able to take even my current modest steps toward designing and exploring future interaction technologies. My deepest gratitude and appreciation go to all of you.


CHAPTER 11
Evaluation of 3D User Interfaces

Most of this book has covered the various aspects of 3D UI design. We have addressed questions such as, How do I choose an appropriate input device? How do I support wayfinding in large-scale environments? What object manipulation techniques provide precise positioning? However, one of the central truths of human–computer interaction (HCI) is that even the most careful and well-informed designs can still go wrong in any number of ways. Thus, evaluation of UIs becomes critical. In fact, the reason we can provide answers to questions such as those above is that researchers have performed evaluations addressing those issues. In this chapter, we discuss some of the evaluation methods that can be used for 3D UIs, metrics that help to indicate the usability of 3D UIs, distinctive characteristics of 3D UI evaluation, and guidelines for choosing evaluation methods. We argue that evaluation should not only be performed when a design is complete, but that it should also be used as an integral part of the design process.

11.1. Introduction

Evaluation has often been the missing component of research in 3D interaction. For many years, the fields of VEs and 3D UIs were so novel and the possibilities so limitless that many researchers simply focused on developing new devices, interaction techniques, and UI metaphors—exploring the design space—without taking the time to assess how good the new designs were. As the fields have matured, however, we are taking a closer look at usability. We must critically analyze, assess, and compare devices, interaction techniques, UIs, and applications if 3D UIs are to be used in the real world.

11.1.1. Purposes of Evaluation

Simply stated, evaluation is the analysis, assessment, and testing of an artifact. In UI evaluation, the artifact is the entire UI or part of it, such as a particular input device or interaction technique. The main purpose of UI evaluation is the identification of usability problems or issues, leading to changes in the UI design. In other words, design and evaluation should be performed in an iterative fashion, such that design is followed by evaluation, leading to a redesign, which can then be evaluated, and so on. The iteration ends when the UI is “good enough,” based on the metrics that have been set (or, more frequently in real-world situations, when the budget runs out or the deadline arrives!).

Although problem identification and redesign are the main goals of evaluation, it may also have secondary purposes. One of these is a more general understanding of the usability of a particular technique, device, or metaphor. This general understanding can lead to design guidelines (such as those presented throughout this book), so that each new design can start from an informed position rather than from scratch. For example, we can be reasonably sure that users will not have usability problems with the selection of items from a pull-down menu in a desktop application, because the design of those menus has already gone through many evaluations and iterations.

Another, more ambitious, goal of UI evaluation is the development of performance models. These models aim to predict the performance of a user on a particular task within an interface. For example, Fitts’s law (Fitts 1954) predicts how quickly a user will be able to position a pointer over a target area based on the distance to the target, the size of the target, and the muscle groups used in moving the pointer. Such performance models must be based on a large number of experimental trials on a wide range of generic tasks, and they are always subject to criticism (e.g., the model doesn’t take an important factor into account, or the model doesn’t apply to a particular type of task). Nevertheless, if a useful model can be developed, it can provide important guidance for designers.
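To make the idea of a performance model concrete, the sketch below evaluates the Shannon formulation of Fitts's law, MT = a + b log2(D/W + 1). The intercept and slope values are illustrative placeholders only; in a real evaluation they would be fit by linear regression over many pointing trials for a given device and user population.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts's law: MT = a + b * log2(D/W + 1).

    The intercept a and slope b are illustrative placeholders; in
    practice they are estimated from experimental trial data."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A far, small target (ID = 5 bits) is predicted to take longer
# than a near, large target (ID = 1 bit).
hard = fitts_movement_time(distance=496, width=16)
easy = fitts_movement_time(distance=64, width=64)
```

Fitting `a` and `b` to measured trial times is what turns this formula into a predictive model for a particular input device.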


11.1.2. Terminology

We must define some important terms before continuing with our discussion of 3D UI evaluation. The most important term (which we’ve already used a couple of times) is usability. We define usability in the broadest sense, meaning that it encompasses everything about an artifact and a person that affects the person’s use of the artifact. Evaluation, then, measures some aspects of the usability of an interface (it is not likely that we can quantify the usability of an interface with a single score). Usability measures (or metrics) fall into several categories, such as system performance, user task performance, and user preference (see section 11.3).

There are at least two roles that people play in a usability evaluation. A person who designs, implements, administers, or analyzes an evaluation is called an evaluator. A person who takes part in an evaluation by using the interface, performing tasks, or answering questions is called a user. In formal experimentation, a user is sometimes called a subject.

Finally, we distinguish below between evaluation methods and evaluation approaches. Evaluation methods (or techniques) are particular steps that can be used in an evaluation. An evaluation approach, on the other hand, is a combination of methods, used in a particular sequence, to form a complete usability evaluation.

11.1.3. Chapter Roadmap

We begin by providing some background information on usability evaluation from the field of HCI (section 11.2). We then narrow the focus to the evaluation of 3D UIs (specifically the evaluation of immersive VEs), looking first at evaluation metrics (section 11.3) and then distinctive characteristics of 3D UI evaluation (section 11.4). In section 11.5, we classify 3D UI evaluation methods and follow that with a description and comparison of two comprehensive approaches to 3D UI evaluation—testbed evaluation and sequential evaluation. Finally, we conclude with a set of guidelines for those performing evaluations of 3D UIs (section 11.7).

11.2. Background

In this section, we describe some of the common tools and methods used in 3D UI evaluation. None of these tools or methods is new or unique to 3D UIs. They have all been used and tested in many other usability evaluation contexts. We present them here as an introduction to these topics for the reader who has never studied HCI. For more detailed information, you can consult any one of a large number of introductory books on HCI (see the recommended reading list at the end of the chapter).

11.2.1. Tools for Evaluation Design and Implementation

The tools presented below are useful for designing, organizing, and implementing usability evaluations of 3D UIs.

User Task Analysis

A user task analysis (Hackos and Redish 1998) provides the basis for design in terms of what users need to be able to do with the application. This analysis generates (among other resources) a list of detailed task descriptions, sequences, and relationships, user work, and information flow. Typically, a user task analysis is provided by a design and development team, based on extensive input from representative users. Whenever possible, it is useful for an evaluator to participate in the user task analysis.

Scenarios

The user task analysis also shapes representative user task scenarios by defining, ordering, and ranking user tasks and task flow. The accuracy and completeness of a scenario directly affect the quality of the subsequent formative and summative evaluations because these methods typically do not reveal usability problems associated with a specific interaction within the application unless it is included in the user task scenario (and is therefore performed by users during evaluation sessions). Similarly, in order to evaluate how well an application’s interface supports high-level information gathering and processing, representative user task scenarios must include more than simply atomic, mechanical- or physical-level tasking; they should also include high-level cognitive, problem-solving tasking specific to the application domain. This is especially important in 3D UIs, where user tasks generally are inherently more complex, difficult, and unusual than in many GUIs.

Taxonomy

Taxonomy is defined as the science of classification, but it has also come to mean a specific classification scheme. Many different types of taxonomies have been used in 3D UI research, including multidimensional design spaces (Card et al. 1990) and metaphor-based classifications (Poupyrev, Weghorst et al. 1997). The main goal of all of these types is to organize a particular set of objects so that they can be thought about systematically. Here we focus on a specific type of taxonomy—the technique-decomposition taxonomy.

The concept of technique decomposition is that each interaction task can be partitioned (decomposed) into subtasks. Similarly, we can decompose the techniques for a particular task into subtechniques, which we call technique components (Bowman and Hodges 1999). Each technique component addresses a single subtask (Figure 11.1). We can think of each subtask as a question that must be answered by the designer of an interaction technique, and the set of technique components for a subtask as the set of possible answers for that question.

The set of technique components for each subtask may be built in two ways. First, we can decompose existing techniques and list the components for each subtask. Second, we can think of original technique components that could be used to accomplish each subtask in the taxonomy.

Such technique-decomposition taxonomies have several advantages. Most relevant to the topic of this chapter, the taxonomy can be used as a guide for the evaluation of techniques. In other words, we can perform summative evaluations (Hix and Hartson 1993) that compare technique components rather than holistic techniques. This means that the results of our evaluation will be more precise—we will be able to claim, for example, that object-manipulation techniques that use the virtual object as the center of rotation are more precise than those that use the virtual hand as the center of rotation. Of course, this increased precision comes


[Figure 11.1 (diagram): Generic technique-decomposition taxonomy. A task is decomposed into subtasks and sub-subtasks, each with a set of candidate technique components (TC). The shaded technique components can be combined to form a complete interaction technique for the top-level task.]


with a cost—we must perform more complex and time-consuming evaluations. The taxonomy can also be used to design new techniques. For example, the four shaded components in Figure 11.1 could be combined to create a complete interaction technique.
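To illustrate, a technique-decomposition taxonomy can be represented as a simple mapping from subtasks to candidate technique components, with complete techniques enumerated as one component per subtask. The subtask and component names below are invented for illustration, not taken from any published taxonomy.

```python
from itertools import product

# Hypothetical decomposition of a selection/manipulation task:
# each subtask maps to its candidate technique components.
taxonomy = {
    "indicate object": ["ray-casting", "virtual hand", "occlusion"],
    "confirm selection": ["button press", "gesture", "voice command"],
    "center of rotation": ["virtual object", "virtual hand"],
}

def enumerate_techniques(taxonomy):
    """A complete technique picks exactly one component per subtask."""
    subtasks = list(taxonomy)
    return [dict(zip(subtasks, combo))
            for combo in product(*taxonomy.values())]

techniques = enumerate_techniques(taxonomy)  # 3 * 3 * 2 = 18 designs
```

Enumerating the design space this way makes it easy to see which component combinations have been evaluated and which remain untested.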

Prototyping

In order to perform a usability evaluation, there must be something to evaluate. In some cases, the full-fledged, final application is available to be evaluated, but more often, evaluation is (or should be) performed earlier in the design cycle so that most problems can be caught early. Thus, many evaluations use some form of prototype.

Prototypes are generally classified based on their level of fidelity—that is, how closely the prototype resembles and acts like the final product. Somewhat surprisingly, a great deal of useful usability information can be gleaned from the evaluation of a low-fidelity prototype such as a paper-based sketch, a storyboard, or a static mockup of the interface. In general, the fidelity of the prototype will increase with each successive evaluation.

One important prototyping method for 3D UIs is the so-called Wizard of Oz (WOZ) approach. A WOZ prototype appears to have a large amount of functionality, even though that functionality is not actually present. A human controls the prototype (like the wizard behind the curtain), making it appear more intelligent or high-fidelity than it actually is. For 3D UIs, this prototyping method can be quite useful because the actual implementation of many 3D interaction techniques and UI metaphors can be very complex. For example, one may not want to go to the trouble of implementing a full-fledged speech interface if it is only one of the options being considered. By developing a simple keyboard-based interface, an evaluator can mimic the actions that would be taken by the system when a user speaks a particular word or phrase and can thus determine the usability characteristics of the actual speech interface.
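A keyboard-driven WOZ stand-in for a speech interface can be as simple as the sketch below. The key bindings and command names are invented for illustration; in a real session the lookup would sit inside the application's input-event loop, with the wizard hidden from the user.

```python
# Hypothetical wizard bindings: the evaluator presses a key whenever
# the user speaks, and the application reacts as if the utterance
# had actually been recognized by a speech engine.
WIZARD_BINDINGS = {
    "s": "select object",
    "d": "delete object",
    "f": "fly forward",
    "h": "return home",
}

def wizard_interpret(key_pressed):
    """Map the wizard's keystroke to the 'recognized' spoken command.
    Returning None models an utterance the system fails to recognize,
    letting the evaluator probe error-handling behavior as well."""
    return WIZARD_BINDINGS.get(key_pressed)

# Simulate two wizard keystrokes: one bound, one unbound.
recognized = wizard_interpret("s")
unrecognized = wizard_interpret("x")
```

The point of the design is that the user experiences a responsive "speech" interface while no recognition software exists yet.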

For more detailed information on prototyping in general, see Hix and Hartson (1993).

11.2.2. Evaluation Methods Used for 3D Interfaces

From the literature, we have compiled a list of usability evaluation methods that have been applied to 3D UIs (although numerous references could be cited for some of the techniques we present, we have included citations that are most recognized and accessible). Most of these methods were developed for 2D or GUI usability evaluation and have been subsequently extended to support 3D UI evaluation.

Cognitive Walkthrough

The cognitive walkthrough (Polson et al. 1992) is an approach to evaluating a UI based on stepping through common tasks that a user would perform and evaluating the interface’s ability to support each step. This approach is intended especially to gain an understanding of the usability of a system for first-time or infrequent users, that is, for users in an exploratory learning mode. Steed and Tromp (1998) have used a cognitive walkthrough approach to evaluate a collaborative VE.

Heuristic Evaluation

Heuristic or guidelines-based expert evaluation (Nielsen and Molich 1992) is a method in which several usability experts separately evaluate a UI design (probably a prototype) by applying a set of heuristics or design guidelines that are either general enough to apply to any UI or are tailored for 3D UIs in particular. No representative users are involved. Results from the several experts are then combined and ranked to prioritize iterative design or redesign of each usability issue discovered. The current lack of well-formed guidelines and heuristics for 3D UI design and evaluation makes this approach more challenging for 3D UIs. Examples of this approach applied to 3D UIs can be found in Gabbard, Hix, and Swan (1999); Stanney and Reeves (2000); and Steed and Tromp (1998).

Formative Evaluation

Formative evaluation (both formal and informal; Hix and Hartson 1993) is an observational, empirical evaluation method, applied during evolving stages of design, that assesses user interaction by iteratively placing representative users in task-based scenarios in order to identify usability problems, as well as to assess the design’s ability to support user exploration, learning, and task performance. Formative evaluations can range from being rather informal, providing mostly qualitative results such as critical incidents, user comments, and general reactions, to being very formal and extensive, producing both qualitative and quantitative (task timing, errors, etc.) results.

Collected data are analyzed to identify UI components that both support and detract from user task performance and user satisfaction. Alternating between formative evaluation and design or redesign efforts ultimately leads to an iteratively refined UI design. Most usability evaluations of 3D UIs fall into the formative evaluation category. The work of Hix and her colleagues (1999) provides a good example.

Summative Evaluation

Summative or comparative evaluation (both formal and informal; Hix and Hartson 1993; Scriven 1967) is an evaluation and statistical comparison of two or more configurations of UI designs, UI components, and/or UI techniques. As with formative evaluation, representative users perform task scenarios as evaluators collect both qualitative and quantitative data. As with formative evaluations, summative evaluations can be formally or informally applied.

Summative evaluation is generally performed after UI designs (or components) are complete and as a traditional factorial experimental design with multiple independent variables. Summative evaluation enables evaluators to measure and subsequently compare the productivity and cost benefits associated with different UI designs. Comparing 3D UIs requires a consistent set of user task scenarios (borrowed and/or refined from the formative evaluation effort), resulting in primarily quantitative results that compare (on a task-by-task basis) a design’s support for specific user task performance.

Many of the formal experiments discussed in Part III of this book are summative evaluations of 3D interaction techniques. For example, see Bowman, Johnson, and Hodges (1999) and Poupyrev, Weghorst, and colleagues (1997).

Questionnaires

A questionnaire (Hix and Hartson 1993) is a written set of questions used to obtain information from users before or after they have participated in a usability evaluation session. Questionnaires are good for collecting demographic information (e.g., age, gender, computer experience) and subjective data (e.g., opinions, comments, preferences, ratings) and are often more convenient and more consistent than spoken interviews.

In the context of 3D UIs, questionnaires are used quite frequently, especially to elicit information about subjective phenomena such as presence (Witmer and Singer 1998) or simulator sickness/cybersickness (Kennedy et al. 1993).


Interviews and Demos

The interview (Hix and Hartson 1993) is a technique for gathering information about users by talking directly to them. An interview can gather more information than a questionnaire and may go to a deeper level of detail. Interviews are good for getting subjective reactions, opinions, and insights into how people reason about issues. Structured interviews have a predefined set of questions and responses. Open-ended interviews permit the respondent (interviewee) to provide additional information, and they permit the interviewer to ask broad questions without a fixed set of answers and explore paths of questioning that may occur to him spontaneously during the interview. Demonstrations (typically of a prototype) may be used in conjunction with user interviews to aid a user in talking about the interface.

In 3D UI evaluation, the use of interviews has not been studied explicitly, but informal interviews are often used at the end of formative or summative usability evaluations (e.g., Bowman and Hodges 1997).

11.3. Evaluation Metrics for 3D Interfaces

Now we turn to metrics. That is, how do we measure the characteristics of a 3D UI when evaluating it? We focus on the general metric of usability. A 3D UI is usable when the user can reach her goals; when the important tasks can be done better, easier, or faster than with another system; and when users are not frustrated or uncomfortable. Note that all of these have to do with the user.

We discuss three types of metrics for 3D UIs: system performance metrics, task performance metrics, and user preference metrics.

11.3.1. System Performance Metrics

System performance refers to typical computer or graphics system performance, using metrics such as average frame rate, average latency, network delay, and optical distortion. From the interface point of view, system performance metrics are really not important in and of themselves. Rather, they are important only insofar as they affect the user’s experience or tasks. For example, the frame rate probably needs to be at real-time levels before a user will feel present. Also, in a collaborative setting, task performance will likely be negatively affected if there is too much network delay.
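A basic system performance measurement can be sketched as below: average frame time and frame rate computed from per-frame timestamps. How the timestamps are logged is application-specific; the steady 20 ms example data is invented for illustration.

```python
def frame_stats(frame_timestamps):
    """Average frame time (s) and frame rate (Hz) computed from a
    list of per-frame timestamps, as might be logged each time the
    render loop completes a frame."""
    deltas = [later - earlier
              for earlier, later in zip(frame_timestamps,
                                        frame_timestamps[1:])]
    avg_frame_time = sum(deltas) / len(deltas)
    return avg_frame_time, 1.0 / avg_frame_time

# Ten frames logged at a steady 20 ms apart correspond to 50 Hz.
stamps = [i * 0.020 for i in range(10)]
avg_dt, fps = frame_stats(stamps)
```

Reporting the distribution of frame times (not just the average) is often more informative, since occasional long frames can break the sense of presence even when the mean looks acceptable.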


11.3.2. Task Performance Metrics

User task performance refers to the quality of performance of specific tasks in the 3D application, such as the time to navigate to a specific location, the accuracy of object placement, or the number of errors a user makes in selecting an object from a set. Task performance metrics may also be domain-specific. For example, evaluators may want to measure student learning in an educational application or spatial awareness in a military training VE.

Typically, speed (efficiency) and accuracy are the most important task performance metrics. The problem with measuring both speed and accuracy is that there is an implicit relationship between them: I can go faster but be less accurate, or I can increase my accuracy by decreasing my speed. It is assumed that for every task, there is some curve representing this speed/accuracy tradeoff, and users must decide where on the curve they want to be (even if they don’t do this consciously). In an evaluation, therefore, if you simply tell your subjects to do a task as quickly and precisely as possible, they will probably end up all over the curve, giving you data with a high level of variability. Therefore, it is very important that you instruct users in a very specific way if you want them to be at one end of the curve or the other. Another way to manage the tradeoff is to tell users to do the task as quickly as possible one time, as accurately as possible the second time, and to balance speed and accuracy the third time. This gives you information about the tradeoff curve for the particular task you’re looking at.
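The three-instruction design above can be analyzed by summarizing trials per instruction condition, which traces out points on the speed/accuracy tradeoff curve. The trial records below are invented for illustration; in a real study they would come from logged evaluation sessions.

```python
from statistics import mean

# Invented trial records: (instruction_condition, completion_time_s, error).
trials = [
    ("speed", 1.1, 1), ("speed", 0.9, 1), ("speed", 1.0, 0),
    ("accuracy", 2.4, 0), ("accuracy", 2.6, 0), ("accuracy", 2.5, 1),
    ("balanced", 1.7, 0), ("balanced", 1.8, 1), ("balanced", 1.6, 0),
]

def summarize(trials):
    """Mean completion time and error rate per instruction condition."""
    summary = {}
    for condition in {c for c, _, _ in trials}:
        times = [t for c, t, _ in trials if c == condition]
        errors = [e for c, _, e in trials if c == condition]
        summary[condition] = (mean(times), mean(errors))
    return summary

results = summarize(trials)
# Speed-stressed trials come out faster but less accurate than
# accuracy-stressed ones, tracing out the tradeoff curve.
```

Comparing the three condition means is far more interpretable than pooling trials from subjects scattered along the curve, which is exactly the variability problem the instructions are meant to control.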

11.3.3. User Preference Metrics

User preference refers to the subjective perception of the interface by the user (perceived ease of use, ease of learning, satisfaction, etc.). These preferences are often measured via questionnaires or interviews and may be either qualitative or quantitative. The user preference metrics generally contribute significantly to overall usability. A usable application is one whose interface does not pose any significant barriers to task completion. Often, HCI experts speak of a transparent interface—a UI that simply disappears until it feels to the user as if he is working directly on the problem rather than indirectly through an interface. UIs should be intuitive, provide good affordances (indications of their use and how they are to be used), provide good feedback, not be obtrusive, and so on. An application cannot be effective unless users are willing to use it (and this is precisely the problem with some more advanced VE applications—they provide functionality for the user to do a task, but a lack of attention to user preference keeps them from being used).

For 3D UIs in particular, presence and user comfort can be important metrics that are not usually considered in traditional UI evaluation. Presence is a crucial, but not very well understood metric for VE systems. It is the “feeling of being there”—existing in the virtual world rather than in the physical world. How can we measure presence? One method simply asks users to rate their feeling of being there on a 1 to 100 scale. Questionnaires can also be used and can contain a wide variety of questions, all designed to get at different aspects of presence. Psychophysical measures are used in controlled experiments where stimuli are manipulated and then correlated to users’ ratings of presence (for example, how does the rating change when the environment is presented in mono versus stereo modes?). There are also some more objective measures. Some are physiological (how the body responds to the VE). Others might look at users’ reactions to events in the VE (e.g., does the user duck when he’s about to hit a virtual beam?). Tests of memory for the environment and the objects within it might give an indirect measurement of the level of presence. Finally, if we know a task for which presence is required, we can measure users’ performance on that task and infer the level of presence. There is still a great deal of debate about the definition of presence, the best ways to measure presence, and the importance of presence as a metric (e.g., Usoh et al. 2000; Witmer and Singer 1998).

The other novel user preference metric for 3D systems is user comfort. This includes several different things. The most notable and well studied is so-called simulator sickness (because it was first noted in flight simulators). This is symptomatically similar to motion sickness and may result from mismatches in sensory information (e.g., your eyes tell your brain that you are moving, but your vestibular system tells your brain that you are not moving). There is also work on the physical aftereffects of being exposed to 3D systems. For example, if a VE misregisters the virtual hand and the real hand (they’re not at the same physical location), the user may have trouble doing precise manipulation in the real world after exposure to the virtual world. More seriously, activities like driving or walking may be impaired after extremely long exposures (1 hour or more). Finally, there are simple strains on arms/hands/eyes from the use of 3D devices. User comfort is also usually measured subjectively, using rating scales or questionnaires. The most famous questionnaire is the simulator sickness questionnaire (SSQ) developed by Kennedy and his colleagues (1993). Researchers have used some objective measures in the study of aftereffects—for example, by measuring the accuracy of a manipulation task in the real world after exposure to a virtual world (Wann and Mon-Williams 2002).
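As one concrete illustration of questionnaire scoring, the sketch below computes weighted SSQ subscale scores from raw subscale sums. The weighting constants are those commonly cited from Kennedy and colleagues (1993), but the assignment of individual symptoms to subscales is omitted here, so consult the original paper before using anything like this for real scoring.

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Weighted SSQ scores from raw subscale sums (each symptom is
    rated 0-3 and summed per subscale). The weights are the constants
    commonly cited from Kennedy et al. (1993); mapping individual
    symptoms onto subscales is omitted in this sketch."""
    return {
        "nausea": nausea_raw * 9.54,
        "oculomotor": oculomotor_raw * 7.58,
        "disorientation": disorientation_raw * 13.92,
        "total": (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74,
    }

# Example: mild symptoms reported mostly on the nausea subscale.
scores = ssq_scores(nausea_raw=3, oculomotor_raw=2, disorientation_raw=1)
```

Administering the questionnaire before and after exposure, and comparing the two sets of scores, is the usual way to attribute symptoms to the VE session itself.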

11.4. Distinctive Characteristics of 3D Interface Evaluation

The approaches we discuss below for usability evaluation of 3D UIs have been developed and used in response to perceived differences between the evaluation of 3D UIs and the evaluation of traditional UIs such as GUIs. Many of the fundamental concepts and goals are similar, but use of these approaches in the context of 3D UIs is distinct. Here, we present some of the issues that differentiate 3D UI usability evaluation, organized into several categories. The categories contain overlapping considerations but provide a rough partitioning of these important issues. Note that many of these issues are not necessarily found in the literature, but instead come from personal experience and extensive discussions with colleagues.

11.4.1. Physical Environment Issues

One of the most obvious differences between 3D UIs and traditional UIs is the physical environment in which that interface is used. In many 3D UIs, nontraditional input and output devices are used, which can preclude the use of some types of evaluation. Users may be standing rather than sitting, and they may be moving about a large space, using whole-body movements. These properties give rise to several issues for usability evaluation. Following are some examples:

• In interfaces using non-see-through HMDs, the user cannot see the surrounding physical world. Therefore, the evaluator must ensure that the user will not bump into walls or other physical objects, trip over cables, or move outside the range of the tracking device (Viirre 1994). A related problem in surround-screen VEs (such as the CAVE) is that the physical walls can be difficult to see because of projected graphics. Problems of this sort could contaminate the results of a usability evaluation (e.g., if the user trips while in the midst of a timed task) and more importantly could cause injury to the user. To mitigate risk, the evaluator can ensure that cables are bundled and will not get in the way of the user (e.g., cables may descend from above). Also, the user may be placed in a physical enclosure that limits movement to areas where there are no physical objects to interfere.

• Many 3D displays do not allow multiple simultaneous viewers (e.g., user and evaluator), so equipment must be set up so that an evaluator can see the same image as the user. With an HMD, for example, this can be done by splitting the video signal and sending it to both the HMD and a monitor. In a surround-screen or workbench VE, a monoscopic view of the scene could be rendered to a monitor, or, if performance will not be adversely affected, both the user and the evaluator can be tracked (this can cause other problems, however; see section 11.4.2 on evaluator considerations). If images are viewed on a monitor, then it is difficult to see both the actions of the user and the graphical environment at the same time, meaning that multiple evaluators may be necessary to observe and collect data during an evaluation session.

• A common and very effective technique for generating important qualitative data during usability evaluation sessions is the “think-aloud” protocol (as described in Hix and Hartson [1993]). With this technique, subjects talk about their actions, goals, and thoughts regarding the interface while they are performing specific tasks. In some 3D UIs, however, voice recognition is used as an interaction technique, making the think-aloud protocol much more difficult and perhaps even impossible. Post-session interviews may help to recover some of the information that would have been obtained from the think-aloud protocol.

• Another common technique involves recording video of both the user and the interface (as described in Hix and Hartson [1993]). Because 3D UI users are often mobile, a single, fixed camera may require a very wide shot, which may not allow precise identification of actions. This could be addressed by using a tracking camera (with, unfortunately, additional expense and complexity) or a camera operator (additional personnel). Moreover, views of the user and the graphical environment must be synchronized so that cause and effect can clearly be seen on the videotape. Finally, recording video of a stereoscopic graphics image can be problematic.

• An ever-increasing number of proposed 3D applications are shared among two or more users (Stiles et al. 1996; Normand et al. 1999). These collaborative 3D UIs become even more difficult to evaluate than single-user 3D UIs due to physical separation of users (i.e., users are in more than one physical location), the additional information that must be recorded for each user, the unpredictability of network behavior as a factor influencing usability, the possibility that each user will have different devices, and the additional complexity of the system, which may cause more frequent crashes or other problems.

11.4.2. Evaluator Issues

A second set of issues relates to the role of the evaluator in a 3D UI usability evaluation. Because of the complexities and distinctive characteristics of 3D UIs, a usability study may require multiple evaluators, different evaluator roles and behaviors, or both. Following are some examples:

• Many VEs attempt to produce a sense of presence in the user—that is, a feeling of actually being in the virtual world rather than the physical one. Evaluators can cause breaks in presence if the user can sense them. In VEs using projected graphics, the user will see an evaluator if the evaluator moves into the user’s field of view. This is especially likely in a CAVE environment (Cruz-Neira et al. 1993), where it is difficult to see the front of a user (e.g., their facial expressions and detailed use of handheld devices) without affecting that user’s sense of presence. This may break presence, because the evaluator is not part of the virtual world. In any type of VE, touching or talking to the user can cause such breaks. If the evaluation is assessing presence, or if presence is hypothesized to affect performance on the task being evaluated, then the evaluator must take care to remain unsensed during the evaluation.

• When breaks in presence are deemed very important for a particular VE, an evaluator may not wish to intervene at all during an evaluation session. This means that the experimental application/interface must be robust and bug-free so that the session does not have to be interrupted to fix a problem. Also, instructions given to the user must be very detailed, explicit, and precise, and the evaluator should make sure the user has a complete understanding of the procedure and tasks before beginning the session.


• 3D UI hardware and software are often more complex and less robust than traditional UI hardware and software. Again, multiple evaluators may be needed to do tasks such as helping the user with display and input hardware, running the software that produces graphics and other output, recording data such as timings and errors, and recording critical incidents and other qualitative observations of a user’s actions.

• Traditional UIs typically require only a discrete, single stream of input (e.g., from mouse and keyboard), but many 3D UIs include multimodal input, combining discrete events, gestures, voice, and/or whole-body motion. It is much more difficult for an evaluator to process these multiple input streams simultaneously and record an accurate log of the user’s actions. These challenges make multiple evaluators and video even more important.
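
One practical way to cope with multiple input streams is to timestamp events from each modality and merge them into a single chronological log for later analysis. The sketch below illustrates the idea; the event types, timestamps, and field names are invented for illustration, not drawn from any particular logging tool.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    t: float                          # timestamp in seconds (sort key)
    modality: str = field(compare=False)
    detail: str = field(compare=False)

# Hypothetical per-modality logs, each already sorted by time.
gestures = [Event(0.4, "gesture", "point"), Event(2.1, "gesture", "grab")]
speech = [Event(1.0, "speech", "select that")]
tracker = [Event(0.0, "head", "yaw=12"), Event(1.5, "head", "yaw=30")]

# Merge the sorted streams into one chronological log.
log = list(heapq.merge(gestures, speech, tracker))
for e in log:
    print(f"{e.t:5.1f}s  {e.modality:8s} {e.detail}")
```

Because `heapq.merge` only requires each input stream to be sorted, new modalities can be added without changing the merging logic.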

11.4.3. User Issues

There are also a large number of issues related to the user population used as subjects in 3D UI usability evaluations. In traditional evaluations, subjects are gleaned from the target user population of an application or from a similar representative group of people. Efforts are often made, for example, to preserve gender equity, to have a good distribution of ages, and to test both experts and novices if these differences are representative of the target user population. The nature of 3D UI evaluation, however, does not always allow for such straightforward selection of users. Following are some examples:

• 3D UIs are still often a “solution looking for a problem.” Because of this, the target user population for a 3D application or interaction technique to be evaluated may not be known or well understood. For example, a study comparing two virtual travel techniques is not aimed at a particular set of users. Thus, it may be difficult to generalize performance results. The best course of action is to evaluate the most diverse user population possible in terms of age, gender, technical ability, physical characteristics, and so on, and to include these factors in any models of performance.

• It may be impossible to differentiate between novice and expert users because there are very few potential subjects who could be considered experts in 3D UIs. Most users who could be considered experts might be, for example, research staff, whose participation in an evaluation could confound the results. Also, because most users are typically novices, the evaluation itself may need to be framed at a lower cognitive and physical level. Evaluators can make no assumptions about a novice user’s ability to understand or use a given interaction technique or device.

• Because 3D UIs will be novel to many potential subjects, the results of an evaluation may exhibit high variability and differences among individuals. This means that the number of subjects needed to obtain a good picture of performance may be larger than for traditional usability evaluations. If statistically significant results are required (depending on the type of usability evaluation being performed), the number of subjects may be even greater.

• Researchers are still studying a large design space for 3D interaction techniques and devices. Because of this, evaluations often compare two or more techniques, devices, or combinations of the two. To perform such evaluations using a within-subjects design, users must be able to adapt to a wide variety of situations. If a between-subjects design is used, a larger number of subjects will again be needed.

• VE evaluations must consider the effects of cybersickness and fatigue on subjects. Although some of the causes of cybersickness are known, there are still no predictive models for it (Kennedy et al. 2000), and little is known regarding acceptable exposure time to VEs. For evaluations, then, a worst-case assumption must be made. A lengthy experiment (anything over 30 minutes, for example, might be considered lengthy, depending on the specific VE) must contain planned rest breaks and contingency plans in case of ill or fatigued subjects. Shortening the experiment is often not an option, especially if statistically significant results are needed.

• Because it is not known exactly what VE situations cause sickness or fatigue, most VE evaluations should include some measurement (e.g., subjective, questionnaire-based [Kennedy et al. 2000], or physiological) of these factors. A result indicating that an interaction technique was 50% faster than any other evaluated technique would be severely misleading if that interaction technique also made 30% of subjects sick! Thus, user comfort measurements should be included in low-level VE evaluations.
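
The point above about high variability and larger subject pools can be made concrete with the standard two-sample, normal-approximation estimate of required sample size, n = 2((z_alpha + z_beta) * sigma / delta)^2 per group. The sketch below applies it; the effect size and standard deviation are illustrative assumptions, not values from any study described here.

```python
from math import ceil
from statistics import NormalDist

def subjects_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Estimate subjects needed per group to detect a mean difference
    of `delta` when the per-group standard deviation is `sigma`,
    using the two-sample normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical: detect a 2-second difference in mean completion time
# when trial times vary with a standard deviation of 4 seconds.
print(subjects_per_group(delta=2.0, sigma=4.0))   # → 63 per group
```

Doubling the variability quadruples the required sample size, which is exactly why the novelty-driven variance of 3D UI subjects inflates study cost.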


• Presence is another example of a measure often required in VE evaluations that has no analogue in traditional UI evaluation. VE evaluations must often take into account subjective reports of perceived presence, perceived fidelity of the virtual world, and so on. Questionnaires (Usoh et al. 2000; Witmer and Singer 1998) have been developed that purportedly obtain reliable and consistent measurements of such factors.

11.4.4. Evaluation Type Issues

Traditional usability evaluation can take many forms. These include informal user studies, formal experiments, task-based usability studies, heuristic evaluations, and the use of predictive models of performance (see section 11.3 for further discussion of these types of evaluations). There are several issues related to the use of various types of usability evaluation in 3D UIs. Following are some examples:

• Evaluations based solely on heuristics (i.e., design guidelines), performed by usability experts, are very difficult in 3D UIs because of a lack of published, verified guidelines for 3D UI design. There are some notable exceptions (Bowman 2002; Conkar et al. 1999; Gabbard 1997; Kaur 1999; Kaur et al. 1999; Mills and Noyes 1999; Stanney and Reeves 2000), but for the most part, it is difficult to predict the usability of a 3D interface without studying real users attempting representative tasks in the 3D UI. It is not likely that a large number of heuristics will appear, at least not until 3D input and output devices become more standardized. Even assuming standardized devices, however, the design space for 3D interaction techniques and interfaces is very large, making it difficult to produce effective and general heuristics to use as the basis for evaluation.

• Another major type of usability evaluation that does not employ users is the application of performance models (e.g., GOMS, Fitts’s law). Very few models of this type have been developed for or adapted to 3D UIs. However, the lower cost of both heuristic evaluation and performance model application makes them attractive for evaluation.

• Because of the complexity and novelty of 3D UIs, the applicability or utility of automated, tool-based evaluation may be greater than it is for more traditional UIs. For example, several issues above have noted the need for more than one evaluator in a 3D UI usability evaluation session. Automated usability evaluations could reduce the need for several evaluators in a single session. There are at least two possibilities for automated usability evaluation of 3D UIs: first, to automatically collect and/or analyze data generated by one or more users in a 3D UI, and second, to perform an analysis of an interface design using an interactive tool that embodies design guidelines (similar to heuristics). Some work has been done on automatic collection and analysis of data using specific types of repeating patterns in users’ data as indicators of potential usability problems (e.g., Siochi and Hix 1991). However, this work was performed on a typical GUI, and there appears to be no research yet conducted that studies automated data collection and evaluation of users’ data in 3D UIs. Thus, differences in the kinds of data for 3D UI usability evaluation have not been explored, but they would involve, at a minimum, collating data from multiple users in a single session, possibly at different physical locations and even in different parts of the 3D environment. At least one tool, MAUVE (Multi-Attribute Usability evaluation tool for Virtual Environments), incorporates design guidelines organized around several VE categories: navigation, object manipulation, input, output (e.g., visual, auditory, haptic), and so on (Stanney et al. 2000). Within each of these categories, MAUVE presents a series of questions to an evaluator, who uses the tool to perform a multicriteria, heuristic-style evaluation of a specific 3D UI.

• When performing statistical experiments to quantify and compare the usability of various 3D interaction techniques, input devices, interface elements, and so on, it is often difficult to know which factors have a potential impact on the results. Besides the primary independent variable (e.g., a specific interaction technique), a large number of other potential factors could be included, such as environment, task, system, or user characteristics. One approach is to try to vary as many of these potentially important factors as possible during a single experiment. This “testbed evaluation” approach (Bowman, Johnson et al. 1999; Snow and Williges 1998) has been used with some success (see section 11.6.1). The other extreme would be to simply hold as many of these other factors as possible constant and evaluate only in a particular set of circumstances. Thus, statistical 3D UI experimental evaluations may be either overly simplistic or overly complex—finding the proper balance is difficult.
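
The first possibility for automated evaluation mentioned above, analyzing logged user data, can be illustrated with a minimal repeating-pattern detector in the spirit of Siochi and Hix’s work: contiguous action sequences that recur often may signal a usability problem. The event log, pattern length, and threshold below are all invented for illustration.

```python
from collections import Counter

def repeated_patterns(actions, length=3, min_count=3):
    """Count every contiguous action sequence of the given length and
    return those occurring at least min_count times."""
    grams = Counter(tuple(actions[i:i + length])
                    for i in range(len(actions) - length + 1))
    return {g: n for g, n in grams.items() if n >= min_count}

# Hypothetical event log: the user repeatedly selects, grabs, and
# releases before finally placing -- a possible sign of trouble.
log = ["select", "grab", "release",
       "select", "grab", "release",
       "select", "grab", "release", "place"]
print(repeated_patterns(log))
# → {('select', 'grab', 'release'): 3}
```

A real 3D UI version would, at a minimum, extend this to collate logs from multiple simultaneous users, as the discussion above notes.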

11.4.5. Miscellaneous Issues

• 3D UI usability evaluations generally focus at a lower level than traditional UI evaluations. In the context of GUIs, a standard look and feel and a standard set of interface elements and interaction techniques exist, so evaluation usually looks at subtle interface nuances or overall interface metaphors. In 3D UIs, however, there are no interface standards, and there is not even a good understanding of the usability of various interface types. Therefore, 3D UI evaluations most often compare lower-level components, such as interaction techniques or input devices.

• It is tempting to overgeneralize the results of evaluations of 3D interaction performed in a generic (nonapplication) context. However, because of the fast-changing and complex nature of 3D UIs, one cannot assume anything (display type, input devices, graphics processing power, tracker accuracy, etc.) about the characteristics of a real 3D application. Everything has the potential to change. Therefore, it is important to include information about the environment in which the evaluation was performed and to evaluate in a range of environments (e.g., using different devices) if possible.

11.5. Classification of 3D Evaluation Methods

A classification space for 3D UI usability evaluation methods can provide a structured means for comparing evaluation methods. One such space classifies methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced (Figure 11.2).

The first characteristic discriminates between those methods that require the participation of representative users (to provide design or use-based experiences and options) and those methods that do not (methods not requiring users still require a usability expert). The second characteristic describes the type of context in which the evaluation takes place. In particular, this characteristic identifies those methods that are applied in a generic context and those that are applied in an application-specific context. The context of evaluation inherently imposes restrictions on the applicability and generality of results. Thus, conclusions or results of evaluations conducted in a generic context can typically be applied more broadly (i.e., to more types of interfaces) than results of an application-specific evaluation method, which may be best suited for applications that are similar in nature. The third characteristic identifies whether or not a given usability evaluation method produces (primarily) qualitative or quantitative results.

Note that the characteristics described above are not designed to be mutually exclusive; rather, each is designed to convey just one of many usability evaluation method characteristics. For example, a particular usability evaluation method may produce both quantitative and qualitative results. Indeed, many of the identified methods are flexible enough to provide insight at many levels. These three characteristics were chosen (over other potential characteristics) because they are often the most significant to evaluators, owing to their overall effect on the usability process. That is, a researcher interested in undertaking usability evaluation will likely need to know what the evaluation will cost, what the impact of the evaluation will be, and how the results can be applied. Each of the three characteristics addresses these concerns: degree of user involvement directly affects the cost to proctor and analyze the evaluation; results of the process indicate what type of information will be produced (for the given cost); and context of evaluation inherently dictates to what extent results may be applied.

Figure 11.2 A classification of usability evaluation methods for 3D UIs. (Image reprinted by permission of MIT Press and Presence: Teleoperators and Virtual Environments) The figure crosses three axes: user involvement (requires users vs. does not require users), context of evaluation (generic vs. application-specific), and type of results (quantitative vs. qualitative). Its cells contain:

• Requires users, generic context, quantitative results: formal summative evaluation; post-hoc questionnaire
• Requires users, generic context, qualitative results: informal summative evaluation; post-hoc questionnaire
• Requires users, application-specific context, quantitative results: formative evaluation; formal summative evaluation; post-hoc questionnaire
• Requires users, application-specific context, qualitative results: formative evaluation (informal and formal); post-hoc questionnaire; interview/demo
• Does not require users, generic context, quantitative results: (generic performance models for VEs [e.g., Fitts’s law])
• Does not require users, generic context, qualitative results: heuristic evaluation
• Does not require users, application-specific context, quantitative results: (application-specific performance models for VEs [e.g., GOMS])
• Does not require users, application-specific context, qualitative results: heuristic evaluation; cognitive walkthrough

This classification is useful on several levels. It structures the space of evaluation methods and provides a practical vocabulary for discussion of methods in the research community. It also allows researchers to compare two or more methods and understand how they are similar or different on a fundamental level. Finally, it reveals “holes” in the space (Card et al. 1990)—combinations of the three characteristics that have rarely or never been tried in the 3D UI community.

Figure 11.2 shows that there are two such holes in this space (the shaded boxes). More specifically, there is a lack of current 3D UI usability evaluation methods that do not require users and that can be applied in a generic context to produce quantitative results (upper right of the figure). Note that some possible existing 2D and GUI evaluation methods are listed in parentheses, but few, if any, of these methods have been applied to 3D UIs. Similarly, there appears to be no method that provides quantitative results in an application-specific setting that does not require users (third box down on the right of the figure). These areas may be interesting avenues for further research.
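
The three-axis space lends itself to a direct computational rendering. The sketch below enumerates every cell of the space, marks those occupied according to Figure 11.2, and reports the unoccupied “holes”; the data structure itself is merely an illustration of the classification, not part of the original work.

```python
from itertools import product

# The three classification axes from the text.
axes = {
    "users": ("no users", "requires users"),
    "context": ("generic", "application-specific"),
    "results": ("quantitative", "qualitative"),
}

# Cells occupied by at least one method, following Figure 11.2:
# user-based methods fill all four of their cells, while expert-based
# methods (heuristic evaluation, cognitive walkthrough) fill only
# the qualitative cells.
occupied = {
    ("requires users", "generic", "quantitative"),
    ("requires users", "generic", "qualitative"),
    ("requires users", "application-specific", "quantitative"),
    ("requires users", "application-specific", "qualitative"),
    ("no users", "generic", "qualitative"),
    ("no users", "application-specific", "qualitative"),
}

holes = [cell for cell in product(*axes.values()) if cell not in occupied]
print(holes)
# → [('no users', 'generic', 'quantitative'),
#    ('no users', 'application-specific', 'quantitative')]
```

The two reported holes match the shaded boxes discussed above: quantitative, expert-based evaluation in both generic and application-specific contexts.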

11.6. Two Multimethod Approaches

A shortcoming of the classification discussed in section 11.5 is that it does not convey “when” in the software development lifecycle a method is best applied or “how” several methods may be applied. In most cases, answers to these questions cannot be determined without a comprehensive understanding of each of the methods presented, as well as the specific goals and circumstances of the 3D UI research or development effort. In this section, we present two well-developed 3D UI evaluation approaches and compare them in terms of practical usage and results.


11.6.1. Testbed Evaluation Approach

Bowman and Hodges (1999) take the approach of empirically evaluating interaction techniques outside the context of applications (i.e., within a generic context rather than within a specific application) and add the support of a framework for design and evaluation, which we summarize here. Principled, systematic design and evaluation frameworks give formalism and structure to research on interaction; they do not rely solely on experience and intuition. Formal frameworks provide us not only with a greater understanding of the advantages and disadvantages of current techniques, but also with better opportunities to create robust and well-performing new techniques based on knowledge gained through evaluation. Therefore, this approach follows several important evaluation concepts, elucidated in the following sections. Figure 11.3 presents an overview of this approach.

Initial Evaluation

The first step toward formalizing the design, evaluation, and application of interaction techniques is to gain an intuitive understanding of the generic interaction tasks in which one is interested and the current techniques available for those tasks (see Figure 11.3, area labeled 1). This is accomplished through experience using interaction techniques and through observation and evaluation of groups of users. These initial evaluation experiences are heavily drawn upon for the processes of building a taxonomy, listing outside influences on performance, and listing performance measures. It is helpful, therefore, to gain as much experience of this type as possible so that good decisions can be made in the next phases of formalization.

Taxonomy

The next step is to establish a taxonomy (Figure 11.3, area 2) of interaction techniques for the interaction task being evaluated. These are technique-decomposition taxonomies, as described in section 11.2.1. For example, the task of changing an object’s color might be made up of three subtasks: selecting an object, choosing a color, and applying the color. The subtask for choosing a color might have two possible technique components: changing the values of R, G, and B sliders or touching a point within a 3D color space. The subtasks and their related technique components make up a taxonomy for the object coloring task.


Ideally, the taxonomies established by this approach need to be correct, complete, and general. Any interaction technique that can be conceived for the task should fit within the taxonomy. Thus, subtasks will necessarily be abstract. The taxonomy will also list several possible technique components for each of the subtasks, but it does not list every conceivable component.

Building taxonomies is a good way to understand the low-level makeup of interaction techniques and to formalize differences between them, but once they are in place, they can also be used in the design process. One can think of a taxonomy not only as a characterization, but also as a design space. Because a taxonomy breaks the task down into separable subtasks, a wide range of designs can be considered quickly, simply by trying different combinations of technique components for each of the subtasks. There is no guarantee that a given combination will make sense as a complete interaction technique, but the systematic nature of the taxonomy makes it easy to generate designs and to reject inappropriate combinations.

Figure 11.3 Testbed evaluation approach. (Image reprinted by permission of MIT Press and Presence: Teleoperators and Virtual Environments) The figure depicts eight numbered areas: (1) initial evaluation; (2) taxonomy; (3) outside factors (task, users, environment, system); (4) performance metrics; (5) testbed evaluation; (6) quantitative performance results; (7) heuristics and guidelines; and (8) user-centered application.
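
The object-coloring taxonomy above can be written down directly, with the design space enumerated as the Cartesian product of technique components per subtask. The components for the color-choosing subtask come from the example in the text; the components listed for the other two subtasks are invented purely for illustration.

```python
from itertools import product

# Taxonomy for the object-coloring task: each subtask maps to its
# candidate technique components.
taxonomy = {
    "select object": ["virtual hand", "ray-casting"],        # hypothetical
    "choose color": ["RGB sliders", "touch point in 3D color space"],
    "apply color": ["press button", "voice command"],         # hypothetical
}

# Each combination of one component per subtask is one point in the
# design space; not every point is a sensible complete technique.
designs = list(product(*taxonomy.values()))
print(len(designs))   # → 8 candidate designs
print(designs[0])
```

As the text notes, such enumeration generates candidates systematically; a designer still has to reject combinations that make no sense as a whole.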

Outside Factors

Interaction techniques cannot be evaluated in a vacuum. A user’s performance on an interaction task may depend on a variety of factors (Figure 11.3, area 3), of which the interaction technique is but one. In order for the evaluation framework to be complete, such factors must be included explicitly and used as secondary independent variables in evaluations. Bowman and Hodges (1999) identified four categories of outside factors.

First, task characteristics are those attributes of the task that may affect user performance, including distance to be traveled or size of the object being manipulated. Second, the approach considers environment characteristics, such as the number of obstacles and the level of activity or motion in the 3D scene. User characteristics, including cognitive measures such as spatial ability and physical attributes such as arm length, may also contribute to user performance. Finally, system characteristics, such as the lighting model used or the mean frame rate, may be significant.

Performance Metrics

This approach is designed to obtain information about human performance in common 3D interaction tasks—but what is performance? Speed and accuracy are easy to measure, are quantitative, and are clearly important in the evaluation of interaction techniques, but there are also many other performance metrics (Figure 11.3, area 4) to be considered. Thus, this approach also considers more subjective performance values, such as perceived ease of use, ease of learning, and user comfort. The choice of interaction technique could conceivably affect all of these, and they should not be discounted. Also, more than any other current computing paradigm, 3D UIs involve the user’s senses and body in the task. Thus, a focus on user-centric performance measures is essential. If an interaction technique does not make good use of human skills, or if it causes fatigue or discomfort, it will not provide overall usability, despite its performance in other areas.


Testbed Evaluation

Bowman and Hodges (1999) use testbed evaluation (Figure 11.3, area 5) as the final stage in the evaluation of interaction techniques for 3D interaction tasks. This approach allows generic, generalizable, and reusable evaluation through the creation of testbeds—environments and tasks that involve all important aspects of a task, that evaluate each component of a technique, that consider outside influences (factors other than the interaction technique) on performance, and that have multiple performance measures. A testbed experiment uses a formal, factorial experimental design and normally requires a large number of subjects. If many interaction techniques or outside factors are included in the evaluation, the number of trials per subject can become overly large, so interaction techniques are usually a between-subjects variable (each subject uses only a single interaction technique), while other factors are within-subjects variables. See the case studies below for examples of testbed experiments.
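
A minimal sketch of such a factorial design follows, with technique as a between-subjects variable and two outside factors fully crossed within subjects. The technique names, factor levels, and repetition count are illustrative assumptions, not the designs of the experiments described in the case studies.

```python
from itertools import product

techniques = ["ray-casting", "virtual hand", "go-go"]   # between-subjects
distances = ["near", "far"]                             # within-subjects factor
sizes = ["small", "large"]                              # within-subjects factor
repetitions = 2

def trials_for(subject_id):
    """Assign one technique per subject (between-subjects) and cross
    all within-subjects conditions, repeated, for that subject."""
    technique = techniques[subject_id % len(techniques)]
    return [(subject_id, technique, d, s, rep)
            for d, s, rep in product(distances, sizes, range(repetitions))]

print(len(trials_for(0)))   # → 8 trials per subject (2 x 2 x 2)
```

In practice the within-subjects trial order would also be randomized or counterbalanced per subject to control for learning effects.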

Application and Generalization of Results

Testbed evaluation produces a set of results or models (Figure 11.3, area 6) that characterize the usability of an interaction technique for the specified task. Usability is given in terms of multiple performance metrics with respect to various levels of outside factors. These results become part of a performance database for the interaction task, with more information being added to the database each time a new technique is run through the testbed. These results can also be generalized into heuristics or guidelines (Figure 11.3, area 7) that can easily be evaluated and applied by 3D UI developers.

The last step is to apply the performance results to 3D applications (Figure 11.3, area 8) with the goal of making them more useful and usable. In order to choose interaction techniques for applications appropriately, one must understand the interaction requirements of the application. There is no single “best” technique, because the technique that is best for one application may not be optimal for another application with different requirements. Therefore, applications need to specify their interaction requirements before the most appropriate interaction techniques can be chosen. This specification is done in terms of the performance metrics that have already been defined as part of the formal framework. Once the requirements are in place, the performance results from testbed evaluation can be used to recommend interaction techniques that meet those requirements.
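
This matching step can be sketched as a weighted score over a performance database. Everything below, including the technique names, metric scores, and requirement weights, is invented for illustration; a real database would hold empirical results from testbed experiments.

```python
# Hypothetical performance database: metric scores in [0, 1] per
# technique, as might be distilled from testbed evaluation.
database = {
    "ray-casting": {"speed": 0.9, "accuracy": 0.6, "comfort": 0.8},
    "virtual hand": {"speed": 0.7, "accuracy": 0.8, "comfort": 0.7},
    "go-go": {"speed": 0.6, "accuracy": 0.9, "comfort": 0.5},
}

# An application states its requirements as weights over the same
# performance metrics defined by the framework.
requirements = {"speed": 0.2, "accuracy": 0.7, "comfort": 0.1}

def recommend(db, weights):
    """Rank techniques by the weighted sum of their metric scores."""
    def score(metrics):
        return sum(weights[m] * metrics[m] for m in weights)
    return sorted(db, key=lambda t: score(db[t]), reverse=True)

print(recommend(database, requirements))
# → ['go-go', 'virtual hand', 'ray-casting']
```

With accuracy weighted heavily, an accuracy-oriented technique ranks first; an application that instead prioritized speed would get a different recommendation from the same database, which is the point of the requirements step.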

Case Studies

Although testbed evaluation could be applied to almost any type of interactive system, it is especially appropriate for 3D UIs because of its focus on low-level interaction techniques. Testbed experiments have been performed comparing techniques for the tasks of travel (Bowman, Davis et al. 1999) and selection/manipulation (Bowman and Hodges 1999).

The travel testbed experiment compared seven different travel techniques for the tasks of naïve search and primed search. In the primed search trials, the initial visibility of the target and the required accuracy of movement were also varied. The dependent variables were time for task completion and subjective user comfort ratings. Forty-four subjects participated in the experiment. The researchers gathered both demographic and spatial ability information for each subject.

The selection/manipulation testbed compared the usability and performance of nine different interaction techniques. For selection tasks, the independent variables were distance from the user to the object, size of the object, and density of distracter objects. For manipulation tasks, the required accuracy of placement, the required degrees of freedom, and the distance through which the object was moved were varied. The dependent variables in this experiment were the time for task completion, the number of selection errors, and subjective user comfort ratings. Forty-eight subjects participated, and the researchers again obtained demographic data and spatial ability scores.

In both instances, the testbed approach produced unexpected and interesting results that would not have been revealed by a simpler experiment. For example, in the selection/manipulation testbed, it was found that selection techniques using an extended virtual hand performed well with larger, nearer objects and more poorly with smaller, farther objects, while selection techniques based on ray-casting performed well regardless of object size or distance. The testbed environments and tasks have also proved to be reusable. The travel testbed was used to evaluate a new travel technique and compare it to existing techniques, while the manipulation testbed has been used to evaluate the usability of common techniques in the context of different VE display devices.


11.6.2. Sequential Evaluation Approach

Gabbard, Hix, and Swan (1999) present a sequential approach to usability evaluation for specific 3D applications. The sequential evaluation approach is a usability engineering approach and addresses both design and evaluation of 3D UIs. However, for the scope of this chapter, we focus on the different types of evaluation and address analysis, design, and prototyping only when they have a direct effect on evaluation.

Although some of its components are well suited for evaluation of generic interaction techniques, the complete sequential evaluation approach employs application-specific guidelines, domain-specific representative users, and application-specific user tasks to produce a usable and useful interface for a particular application. In many cases, results or lessons learned may be applied to other, similar applications (for example, 3D applications with similar display or input devices, or with similar types of tasks). In other cases (albeit less often), it is possible to abstract the results for general use.

Sequential evaluation evolved from iteratively adapting and enhancing existing 2D and GUI usability evaluation methods. In particular, it modifies and extends specific methods to account for complex interaction techniques, nonstandard and dynamic UI components, and multimodal tasks inherent in 3D UIs. Moreover, the adapted/extended methods both streamline the usability engineering process and provide sufficient coverage of the usability space. Although the name implies that the various methods are applied in sequence, there is considerable opportunity to iterate both within a particular method as well as among methods. It is important to note that all the pieces of this approach have been used for years in GUI usability evaluations. The unique contribution of Gabbard, Hix, and Swan’s (1999) work is the breadth and depth offered by progressive use of these techniques, adapted when necessary for 3D UI evaluation, in an application-specific context. Further, the way in which each step in the progression informs the next step is an important finding: the ordering of the methods guides developers toward a usable application.

Figure 11.4 presents the sequential evaluation approach. It allows developers to improve a 3D UI by a combination of expert-based and user-based techniques. This approach is based on sequentially performing user task analysis (see Figure 11.4, area labeled 1), heuristic (or guideline-based expert) evaluation (Figure 11.4, area 2), formative evaluation (Figure 11.4, area 3), and summative evaluation (Figure 11.4, area 4), with iteration as appropriate within and among each type of evaluation. This approach leverages the results of each individual method by systematically defining and refining the 3D UI in a cost-effective progression.

Depending upon the nature of the application, this sequential evaluation approach may be applied in a strictly serial fashion (as Figure 11.4’s solid black arrows illustrate) or applied iteratively many times (either as a whole or per individual method, as Figure 11.4’s gray arrows illustrate). For example, when used to evaluate a complex command-and-control battlefield visualization application (Hix et al. 1999), user task analysis was followed by significant iterative use of heuristic and formative evaluation and lastly followed by a single, broad summative evaluation.

[Figure: diagram showing the progression (1) User Task Analysis, (2) Heuristic Evaluation, (3) Formative Evaluation, and (4) Summative Evaluation leading to a User-Centered Application, with intermediate products (A) Task Descriptions, Sequences, and Dependencies; (B) Guidelines and Heuristics; (C) Streamlined User Interface Designs; (D) Representative User Task Scenarios; and (E) Iteratively Refined User Interface Designs.]

Figure 11.4 Sequential evaluation approach. (Image reprinted by permission of MIT Press and Presence: Teleoperators and Virtual Environments)

From experience, this sequential evaluation approach provides cost-effective assessment and refinement of usability for a specific 3D application. Obviously, the exact cost and benefit of a particular evaluation effort depends largely on the application’s complexity and maturity. In some cases, cost can be managed by performing quick and lightweight formative evaluations (which involve users and thus are typically the most time-consuming to plan and perform). Moreover, by using a “hallway methodology,” user-based methods can be performed quickly and cost-effectively by simply finding volunteers from within one’s own organization. This approach should be used only as a last resort or in cases where the representative user class includes just about anyone. When used, care should be taken to ensure that “hallway” users provide a close representative match to the application’s ultimate end users.

The individual methods involved in sequential evaluation are described earlier in the chapter (user task analysis in section 11.2.1 and heuristic, formative, and summative evaluation in section 11.2.2).

Case Studies

The sequential evaluation approach has been applied to several 3D UIs, including the Naval Research Lab’s Dragon application: a VE for battlefield visualization (Gabbard et al. 1999). Dragon is presented on a Responsive Workbench that provides a 3D display for observing and managing battlespace information shared among commanders and other battle planners. The researchers performed several evaluations over a nine-month period, using one to three users and two to three evaluators per session. Each evaluation session revealed a set of usability problems and generated a corresponding set of recommendations. The developers would address the recommendations and produce an improved UI for the next iteration of evaluation. The researchers performed four major cycles of iteration during the evaluation of Dragon, each cycle using the progression of usability methods described in this section.

During the expert guideline-based evaluations, various user interaction design experts worked alone or collectively to assess the evolving user interaction design for Dragon. The expert evaluations uncovered several major design problems that are described in detail in Hix et al. (1999). Based on user task analysis and early expert guideline-based evaluations, the researchers created a set of user task scenarios specifically for battlefield visualization. During each formative session, there were at least two and often three evaluators present. Although both the expert guideline-based evaluation sessions and the formative evaluation sessions were personnel-intensive (with two or three evaluators involved), it was found that the quality and amount of data collected by multiple evaluators greatly outweighed the cost of those evaluators.

Finally, the summative evaluation statistically examined the effect of four factors: locomotion metaphor (egocentric versus exocentric), gesture control (controls rate versus controls position), visual presentation device (workbench, desktop, CAVE), and stereopsis (present versus not present). The results of these efforts are described in Hix and Gabbard (2002). This experience with sequential evaluation demonstrated its utility and effectiveness.
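A fully crossed design like this grows multiplicatively with each factor. A small Python sketch of the crossing (the factor and level names paraphrase the description above; the helper function is hypothetical, not from the study):

```python
from itertools import product

# The four factors from the summative evaluation described above.
FACTORS = {
    "locomotion": ["egocentric", "exocentric"],
    "gesture_control": ["rate", "position"],
    "display": ["workbench", "desktop", "CAVE"],
    "stereopsis": ["present", "absent"],
}

def factorial_cells(factors):
    """Return one dict per cell of the full factorial crossing."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

print(len(factorial_cells(FACTORS)))  # 2 x 2 x 3 x 2 = 24 cells
```

Enumerating the cells this way makes it easy to see why adding even one more factor level multiplies the number of sessions an evaluator must run.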

11.6.3. Comparison of Approaches

The two major evaluation methods we have presented for 3D UIs—testbed evaluation and sequential evaluation—take quite different approaches to the same problem: how to improve usability in 3D applications. At a high level, these approaches can be characterized in the space defined in section 11.5. Sequential evaluation is done in the context of a particular application and can have both quantitative and qualitative results. Testbed evaluation is done in a generic evaluation context and usually seeks quantitative results. Both approaches employ users in evaluation.

In this section, we take a more detailed look at the similarities of and differences between these two approaches. We organize this comparison by answering several key questions about each of the methods. Many of these questions can be asked of other evaluation methods and perhaps should be asked prior to designing a usability evaluation. Indeed, answers to these questions may help identify appropriate evaluation methods given specific research, design, or development goals. Developers should attempt to find valid answers to these and related questions regarding different usability evaluation methods. Another possibility is to understand the general properties, strengths, and weaknesses of each approach so that the two approaches can be linked in complementary ways.

What Are the Goals of the Approach?

As mentioned above, both approaches ultimately aim to improve usability in 3D applications. However, there are more specific goals that exhibit differences between the two approaches.

Testbed evaluation has the specific goal of finding generic performance characteristics of interaction techniques. This means that one wants to understand interaction technique performance in a high-level, abstract way, not in the context of a particular application. This goal is important because, if achieved, it can lead to wide applicability of the results. In order to do generic evaluation, the testbed approach is limited to general techniques for common, universal tasks (such as navigation, selection, or manipulation). To put it another way, testbed evaluation is not designed to evaluate special-purpose techniques for specific tasks, such as applying a texture. Rather, it abstracts away from these specifics, using generic properties of the task, user, environment, and system.

Sequential evaluation’s immediate goal is to iterate toward a better UI for a particular application, in this case a specific 3D application. It looks very closely at particular user tasks of an application to determine which scenarios and interaction techniques should be incorporated. In general, this approach tends to be quite specific in order to produce the best possible interface design for a particular application under development.

When Should the Approach Be Used?

By its non-application-specific nature, the testbed approach actually falls completely outside the design cycle of a particular application. Ideally, testbed evaluation should be completed before an application is even a glimmer in the eye of a developer. Because it produces general performance/usability results for interaction techniques, these results can be used as a starting point for the design of new 3D UIs.

On the other hand, sequential evaluation should be used early and continually throughout the design cycle of a 3D application. User task analysis is necessary before the first interface prototypes are built. Heuristic and formative evaluations of a prototype produce recommendations that can be applied to subsequent design iterations. Summative evaluations of different design possibilities can be done when the choice of design (e.g., for interaction techniques) is not clear.

The distinct time periods in which testbed evaluation and sequential evaluation are employed suggest that combining the two approaches is possible and even desirable. Testbed evaluation can first produce a set of general results and guidelines that can serve as an advanced and well-informed starting point for a 3D application’s UI design. Sequential evaluation can then refine that initial design in a more application-specific fashion.

In What Situations Is the Approach Useful?

Testbed evaluation allows the researcher to understand detailed performance characteristics of common interaction techniques, especially user performance. It provides a wide range of performance data that may be applicable to a variety of situations. In a development effort that requires a suite of applications with common interaction techniques and interface elements, testbed evaluation could provide a quantitative basis for choosing them, because developers could choose interaction techniques that performed well across the range of tasks, environments, and users in the applications; their choices are supported by empirical evidence.

As we have said, the sequential evaluation approach should be used throughout the design cycle of a 3D UI, but it is especially useful in the early stages of interface design. Because sequential evaluation produces results even on very low-fidelity prototypes or design specifications, a 3D application’s UI can be refined much earlier, resulting in greater cost savings. Also, the earlier this approach is used in development, the more time remains for producing design iterations, which ultimately results in a better product. This approach also makes the most sense when a user task analysis has been performed. This analysis will suggest task scenarios that make evaluation more meaningful and effective.

What Are the Costs of Using the Approach?

The testbed evaluation approach can be seen as very costly and is definitely not appropriate for every situation. In certain scenarios, however, its benefits can make the extra effort worthwhile. Some of the most important costs associated with testbed evaluation include difficult experimental design (many independent and dependent variables, where some of the combinations of variables are not testable), experiments requiring large numbers of trials to ensure significant results, and large amounts of time spent running experiments because of the number of subjects and trials. Once an experiment has been conducted, the results may not be as detailed as some developers would like. Because testbed evaluation looks at generic situations, information on specific interface details such as labeling, the shape of icons, and so on will not usually be available.

In general, the sequential evaluation approach may be less costly than testbed evaluation because it can focus on a particular 3D application rather than pay the cost of abstraction. However, some important costs are still associated with this method. Multiple evaluators may be needed. Development of useful task scenarios may take a large amount of effort. Conducting the evaluations themselves may be costly in terms of time, depending on the complexity of task scenarios. Most importantly, because this is part of an iterative design effort, time spent by developers to incorporate suggested design changes after each round of evaluation must be considered.

What Are the Benefits of Using the Approach?

Because testbed evaluation is so costly, its benefits must be significant before it becomes a useful evaluation method. One such benefit is generality of the results. Because testbed experiments are conducted in a generalized context, the results may be applied many times in many different types of applications. Of course, there is a cost associated with each use of the results because the developer must decide which results are relevant to a specific 3D UI. Second, testbeds for a particular task may be used multiple times. When a new interaction technique is proposed, that technique can be run through the testbed and compared with techniques already evaluated. The same set of subjects is not necessary, because testbed evaluation usually uses a between-subjects design. Finally, the generality of the experiments lends itself to development of general guidelines and heuristics. It is more difficult to generalize from experience with a single application.

For a particular application, the sequential evaluation approach can be very beneficial. Although it does not produce reusable results or general principles in the same broad sense as testbed evaluation, it is likely to produce a more refined and usable 3D UI than if the results of testbed evaluation were applied alone. Another of the major benefits of this method relates to its involvement of users in the development process. Because members of the representative user group take part in many of the evaluations, the 3D UI is more likely to be tailored to their needs and should result in higher user acceptance and productivity, reduced user errors, and increased user satisfaction. There may be some reuse of results, because other applications may have similar tasks or requirements, or they may be able to use refined interaction techniques produced by the process.

How Are the Approach’s Evaluation Results Applied?

The results of testbed evaluation are applicable to any 3D UI that uses the tasks studied with a testbed. Currently, testbed results are available for some of the most common tasks in 3D UIs: travel and selection/manipulation (Bowman, Johnson et al. 2001). The results can be applied in two ways. The first, informal technique is to use the guidelines produced by testbed evaluation in choosing interaction techniques for an application (as in Bowman, Johnson et al. 1999). A more formal technique uses the requirements of the application (specified in terms of the testbed’s performance metrics) to choose the interaction technique closest to those requirements. Both of these approaches should produce a set of interaction techniques for the application that makes it more usable than the same application designed using intuition alone. However, because the results are so general, the 3D UI will almost certainly require further refinement.
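The more formal technique can be read as a nearest-neighbor match: express the application’s requirements and each technique’s testbed results on the same performance metrics, then pick the technique with the smallest distance to the requirements. A hedged Python sketch (the metric names, 0–1 scores, and technique profiles are invented for illustration, not actual testbed data):

```python
import math

# Hypothetical performance profiles on shared metrics
# (scores in [0, 1], higher is better; values are invented).
TECHNIQUES = {
    "ray-casting": {"speed": 0.9, "accuracy": 0.6, "comfort": 0.8},
    "go-go":       {"speed": 0.7, "accuracy": 0.8, "comfort": 0.6},
}

def closest_technique(requirements, techniques=TECHNIQUES):
    """Choose the technique whose profile is nearest the requirements."""
    def distance(profile):
        return math.sqrt(sum((profile[m] - requirements[m]) ** 2
                             for m in requirements))
    return min(techniques, key=lambda name: distance(techniques[name]))

# An application that values accuracy over raw speed:
print(closest_technique({"speed": 0.6, "accuracy": 0.9, "comfort": 0.7}))
# -> go-go
```

Weights could be added to the distance function when some metrics matter more to the application than others.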

Application of results of the sequential evaluation approach is much more straightforward. Heuristic and formative evaluations produce specific suggestions for changes to the application’s UI or interaction techniques. The result of summative evaluation is an interface or set of interaction techniques that performs the best or is the most usable in a comparative study. In any case, results of the evaluation are tied directly to changes in the interface of the 3D application.

11.7. Guidelines for 3D Interface Evaluation

In this section, we present some guidelines for those wishing to perform usability evaluations of 3D UIs. The first subsection presents general guidelines, and the second subsection focuses specifically on formal experimentation.

11.7.1. General Guidelines

Begin with informal evaluation.

Informal evaluation is very important, both in the process of developing an application and in doing basic interaction research. In the context of an application, informal evaluation can quickly narrow the design space and point out major flaws in the design. In basic research, informal evaluation helps you understand the task and the techniques on an intuitive level before moving on to more formal classifications and experiments.

Acknowledge and plan for the differences between traditional UI and 3D UI evaluation.


Section 11.4 detailed a large number of distinctive characteristics of 3D UI evaluation. These differences must be considered when designing a study. For example, you should plan to have multiple evaluators, incorporate rest breaks into your procedure, and assess whether breaks in presence could affect your results.

Choose an evaluation approach that meets your requirements.

Just as we discussed with respect to interaction techniques, there is no optimal usability evaluation method or approach. A range of methods should be considered, and important questions such as those in section 11.6.3 should be asked. For example, if you have designed a new interaction technique and want to refine the usability of the design before any implementation, a heuristic evaluation or cognitive walkthrough fits the bill. On the other hand, if you must choose between two input devices for a task in which a small difference in efficiency may be significant, a formal experiment may be required.

Use a wide range of metrics.

Remember that speed and accuracy alone do not equal usability. Also remember to look at learning, comfort, presence, and other metrics in order to get a complete picture of the usability of the interface.

11.7.2. Guidelines for Formal Experimentation

Design experiments with general applicability.

If you’re going to do formal experiments, you will be investing a large amount of time and effort, so you want the results to be as general as possible. Thus, you have to think hard about how to design tasks that are generic, performance measures to which real applications can relate, and a method for applications to easily reuse the results.


In doing formal experiments, especially testbed evaluations, you often have too many variables to actually test without an infinite supply of time and subjects. Small pilot studies can show trends that may allow you to remove certain variables because they do not appear to affect the task you’re doing.

In most formal experiments on the usability of 3D UIs, the most interesting results have been interactions. That is, it’s rarely the case that technique A is always better than technique B. Rather, technique A works well when the environment has characteristic X, and technique B works well when the environment has characteristic Y. Statistical analysis should reveal these interactions between variables.
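A crossover of this kind shows up as a nonzero interaction contrast on the cell means. A Python sketch with invented numbers (a real study would test the contrast for statistical significance, for example with a two-way ANOVA):

```python
# Mean task-completion times (seconds) for techniques A and B in
# environments X and Y -- invented numbers showing a crossover.
MEANS = {
    ("A", "X"): 2.1, ("A", "Y"): 4.0,
    ("B", "X"): 3.9, ("B", "Y"): 2.2,
}

def interaction_contrast(means):
    """Zero if the technique's effect is the same in both environments."""
    return ((means[("A", "X")] - means[("B", "X")])
            - (means[("A", "Y")] - means[("B", "Y")]))

# Negative here: A is faster in X but slower in Y, so neither
# technique is best in all situations.
print(interaction_contrast(MEANS))
```

When the contrast is near zero, the factors are additive and a single "best" technique may exist; a large contrast means the recommendation must be conditional on the environment.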

Use pilot studies to determine which variables should be tested in the main experiment.

Look for interactions between variables—rarely will a single technique be the best in all situations.

Recommended Reading

Many entry-level HCI textbooks, such as the following, provide an excellent introduction to usability evaluation and usability engineering:

Hix, D., and H. Hartson (1993). Developing User Interfaces: Ensuring Usability Through Product & Process, John Wiley & Sons.

Rosson, M., and J. Carroll (2001). Usability Engineering: Scenario-Based Development of Human Computer Interaction, Morgan Kaufmann Publishers.

Acknowledgment

Much of the content in this chapter comes from a 2002 article by Doug Bowman, Joseph Gabbard, and Deborah Hix that appeared in the journal Presence: “A Survey of Usability Evaluation in Virtual Environments: Classification and Comparison of Methods.” (Presence: Teleoperators and Virtual Environments, 11[4], 404–424).

We thank the coauthors and the MIT Press for their generous permission to reuse the material here. This content is © 2002 MIT Press.


Index


Numbers
1-DOF Menus, 261–262
2D
  device interaction techniques, 19
  mice as desktop input device, 91–92
2D adaptations, 336–340
  2D interaction with 3D objects, 339–340
  2D UI as element of 3D environment, 336–339
  adapted 2D menus, 260–261
  interface controls for 3D manipulation, 172
  overlaying 2D UI on a 3D world, 336
  overview of, 336
3D mice, 110–114
  handheld, 111–113
  overview of, 110
  user-worn, 113–114
3D sound
  auralization, 64
  binaural cues, 60
  external speaker configurations, 65–66
  generation, 62
  head-related transfer functions, 61
  headphone configurations, 64–65
  localization cues, 59–60
  reverberation cues, 62
  sampling, 63–64
  sound intensity cues, 62
  spectral and dynamic cues, 61
  vision and environmental familiarity cues, 62
3D UIs. See 3D user interfaces
3D user interfaces (3D UIs)
  applications, 8–9, 22–25
  AR interfaces, 394–395
  defined, 7
  design strategies. See Design strategies
  future of. See Future of 3D UIs
  history of, 11–13
  importance of, 4–6
  influences on design of, 15–18
  interaction techniques, 19
  manipulation. See Selection and manipulation
  mathematics. See Mathematics of 3D UIs
  software tools, 21


3D user interfaces (3D UIs) (cont.)
  technological background, 16–17
  terminology, 6–8
  usability evaluation, 21–22
  what they are, 3–4
3D widgets
  desktop manipulation, 174–175
  as graphical menu, 263–265
  overview of, 19
6-DOF devices
  desktop input, 95–96
  recommended reading, 134

A
Absolute amplification, 170
Abstract information, 414
Accommodation
  practical application of 3D displays and, 409
  in visual perception, 37, 39
Accommodation-convergence mismatch, 39, 409
Acoustic shadows, 60
Acoustic tracking devices, 98–99
Active input devices, 89
Active travel techniques, 189
Adapted 2D menus, 260–261
Aerial perspective, 36
Aesthetics, of 3D development, 342–344
Affordances, manipulation, 148
Age, as basis of user groups, 327
Agents, in AR, 403–404
Aggregation of techniques, 3D manipulation, 163–164
Alphanumeric symbols, 293
Ambient sound, 67
Amplitude panning, 66
Anaglyphic stereo, 41–42
Annotation
  3D sound applications, 67
  design, 288
  Virtual Annotation system, 306
Aperture, pointing techniques, 153–156
Application-level factors, system control, 258
Application-specific manipulation tasks, 143
Applications
  3D sound, 66–67
  3D UIs, 22–25
  killer apps, 417–418
  list of common, 8–9
  mapping to devices, tasks, and techniques, 409
AR (augmented reality)
  advantages/disadvantages of tangible AR, 402–403
  agents, 403–404
  augmented surfaces, 395–397
  data browsers, 391–394
  defined, 7
  design of tangible AR, 398–400
  interfaces, 394–395
  overview of, 387–389
  recommended reading, 406
  tangible interaction and, 397–398
  time-multiplexed interaction in tangible AR, 400–402
  transitional interfaces, 404–405
  virtual interfaces brought into real world with, 390–391
  what it is, 389–390
ARCBALL, 177–179, 219
Architecture
  3D environments used for, 23
  design, 241
  recommended reading, 347
Arm exoskeleton, 73
Arm-mounted displays, 52–54
  advantages/disadvantages, 53–54, 79
  overview of, 52–53
Art, in 3D environments, 23
Artificial cues, wayfinding
  audio and olfactory, 250
  compasses, 246–247
  landmarks, 248
  maps, 242–246


  overview of, 242
  reference objects, 248
  signs, 247–248
Asymmetric bimanual tasks, 324
Asymmetric bimanual techniques, 325–327
Audio cues, wayfinding, 236, 250
Auditory displays
  3D applications, 66–67
  3D sound generation, 62
  3D sound localization cues, 59–60
  3D sound sampling and synthesis, 63–64
  auralization, 64
  binaural cues, 60
  external speakers, 65–66
  head-related transfer functions, 61
  headphones, 64–65
  overview of, 59
  recommended reading, 84–85
  reverberation cues, 62
  sound intensity cues, 62
  spectral and dynamic cues, 61
  vision and environmental familiarity cues, 62
Augmented reality. See AR (augmented reality)
Augmented surfaces, 395–397
  disadvantages, 397
  examples of, 395–396
  tangible interaction and, 397
Auralization, 3D sound, 64
Autostereoscopic displays, 56–59
  3D imagery and, 59
  choosing them, 80
  holographic displays, 58
  lenticular displays, 56
  overview of, 56
  recommended reading, 84
  volumetric displays, 57–58

B
Bat tracking device, 111
Bicycles, 198
Bimanual interaction. See Two-handed interaction
Binaural cues, 3D sound localization, 60
Binocular disparity cues, visual depth, 38
Binocular Omni-Orientation Monitor (BOOM), 52
Binocular rivalry, 38
Bioelectric input devices, 120
Body-referenced haptic devices, 73–74, 81
Body-referenced menus, 266
BOOM (Binocular Omni-Orientation Monitor), 52
Brain input devices, 120–121
Bulldozer technique, two-joystick navigation, 222
Button devices, 128

C
C3 (command and control cube), 264–265
Cabling
  as tether and encumbrance in 3D interfaces, 408
  user comfort and, 329
  walking and, 192
Camera-based recognition, of gestures, 272
Camera-in-hand steering, 203–204
Canonical manipulation tasks, 141–143
Cartoon rendering, 342–343
CAT (Control Action Table), 117–118
Cathode ray tube (CRT), 50
CAVE, 43
CavePainting Table, 116–117
Chameleon system, 392
Chord keyboard, 91, 296–297
Cirrin text input technique, 300–301
Classification of techniques. See also Taxonomies
  evaluation, 367–369
  manipulation, 147–150
  system control, 259
  travel, 189–192
Clutching, 146, 180, 277
Cognition
  3D UIs and, 15


Cognition (cont.)
  cognitive ability as basis of user groups, 328
  recommended reading, 253–254
Cognitive maps, 230
Cognitive walkthrough, 355
Collaboration
  3D environments used for, 23
  3D sound applications for, 67
  3D UI applications for, 9
  between multiple users, 414–415
Colocated widgets, 263
Color, in real-world design, 241
Comfort, 3D UI design, 328–330
Command and control cube (C3), 264–265
Commands, system control, 255–256
Communication between users, symbolic input, 289
Comparative evaluation, 22, 356
Compasses, wayfinding, 246–247
Compliance
  directional, 170–171, 318
  feedback displacement and, 317
  nulling, 171, 318
  real world with virtual world, 277
  spatial, 318
  temporal, 318–319
Computer-driven Upper Body Environment (CUBE), 43
Computers. See also HCI (human-computer interaction)
  connecting home-brewed input devices to, 124–126
  graphics, 83–84
Constraints
  3D UI design, 322–323
  defined, 322
  disadvantages, 323
  on DOF, 339
  manipulation, 149
  in semiautomated steering, 206
  types/uses of, 322–323
Construction, 3D environments used for, 23
Context-aware information displays, 392
Context-sensitivity, of widgets, 263
Control Action Table (CAT), 117–118
Control-body linkage, 257
Control-display mappings
  defined, 27
  simple virtual hand and, 159–160
  as software component of interaction techniques, 135
Control, manual
  dimensions, 144–145
  recommended reading, 181
Controls
  2D controls for 3D manipulation, 172
  system. See System control
Convergence
  practical application of 3D displays and, 409
  in visual perception, 37, 39
Cooperative manipulation, 324
Costs
  analysis in sequential approach, 380–381
  analysis in testbed approach, 380–381
  quantifying benefits of 3D UIs, 416–417
Cross-task techniques, 211, 214, 225, 412, 413
Crosstalk, 65
CRT (cathode ray tube), 50
CUBE (Computer-driven Upper Body Environment), 43
Cubic Mouse, 111–112, 277
Curvature, of travel paths, 187
Cybersickness
  feedback and, 317
  FOV (field of view) and, 235
  short sessions as solution for, 330
  surround-screens and, 45
  usability evaluation and, 22, 359, 364
Cycles, physical locomotion, 198–199

D
Dasher text input technique, 300–301
Data browsers, 391–394
Data gloves


  combining bend-sensing and pinch input data, 109–110
  overview of, 106
  Pinch Gloves, 108–109
  recommended reading, 134
Decision-making, wayfinding, 230
Degrees of freedom. See DOF (degrees of freedom)
Demonstrations, as evaluation method, 357
Depth cues, visual, 34–40
  binocular disparity and stereopsis, 38
  monocular, static, 34–36
  motion parallax, 37–38
  oculomotor, 37
  recommended reading, 84
  relevance, 38–40
Design
  annotation of, 288
  architecture, 241
  gestural commands, 272–273
  graphical menus, 265
  influences of 3D UIs on, 15–18
  real-world principles for, 241–242
  system control tools, 276–277
  tangible AR, 398–400
  UIs (user interfaces), 3
  voice commands, 268–269
Design guidelines
  evaluation, 382–384
  evaluation leads to creation of, 350
  input devices, 126–133
  landmarks, 249
  maps, 243–245
  output devices, 77–83
  rotation interaction techniques, 169
  selection and manipulation techniques, 179–181
  symbolic input, 306–310
  system controls, 280–282
  travel, 222–226
  wayfinding, 251–253
Design strategies, 3D UIs, 313–347
  2D UI adaptations, 336–340
  application design, 8
  approaches, 19–20
  constraints, 322–323
  designing for humans, 314
  development process, 345–347
  feedback compliance, 317–319
  feedback dimensions, 316–317
  feedback overview, 315–316
  feedback substitution, 319–320
  future developments, 412–415
  human factors, 315
  invention, 314–315, 331
  magic and aesthetics, 340–344
  overview of, 19–20, 313–314
  passive haptic feedback, 320–322
  real-world metaphors, 333–335
  reality simulation, 331–333
  recommended reading, 347
  two-handed control, 323–327
  user comfort, 328–330
  user groups, 327–328
Designing for humans. See Human-based design
Desktop camera control techniques, 220–222
Desktop devices, 90–96
  2D mice and trackballs, 91–92
  6-DOF devices, 95–96
  joysticks, 93–95
  keyboards, 91
  overview of, 90–91
  pen-based tablets, 92–93
Desktop manipulation techniques
  3D widgets, 174–175
  ARCBALL technique, 177–179
  overview of, 171–174
  Virtual Sphere techniques, 175–177
Development
  aesthetics, 342–344
  environment for 3D UIs, 413
  future of 3D UIs, 412–415
  iterative, 311, 350
  process for 3D UIs, 345–347
  tools for 3D applications, 21


Device-independence, 413–414Device-referenced menus, 266Devices

communication with, 28desktop. See Desktop deviceseye tracking, 105–106input. See Input deviceslocomotion, 197–199, 226output. See Output devicesphysical-force-based, 220placement, 145–146tracking. See Tracking devicesusability evaluation of, 21

Digital ink, 302, 309
Digital voice, 306
Dimensions, of feedback, 316–317
Direct human input devices, 118–121
  bioelectric input, 120
  brain input, 120–121
  overview of, 118–119
  speech input, 119–120

Direction/target selection, travel, 190
Directional compliance, 170–171, 318
Direct manipulation, 158–162
Display devices. See also Output devices
  3D, 17–18
  AR systems, 390
  overview of, 29

Display geometry (screen shape), 33
Distance, as characteristic of travel, 187
Districts, legibility techniques, 240
DOF (degrees of freedom)
  6-DOF devices, 95–96
  constraints used to reduce, 323, 339
  control dimensions and, 144–145
  defined, 6
  Go-Go technique and, 162
  input devices, 88
  terrain following and, 188

Dynamic alignment tools, 323
Dynamic perceptual cues
  sound localization, 61
  visual, 38

E
Edges, legibility techniques, 240
Education, 3D environments used for, 23
Egocentric reference frames, wayfinding, 232–234
Egomotion, 232
Entertainment, 3D environments used for, 23
Environment design, wayfinding
  legibility techniques, 239–241
  overview of, 239
  real-world design principles, 241–242
Environmental familiarity cues, 3D sound localization, 62
Ergonomics
  haptic displays, 71
  input device, 127
  visual displays, 33–34
Evaluation. See Usability evaluation
Evaluators, 351, 362–363. See also Usability evaluation
Exocentric reference frames, wayfinding, 232–234
Exploration tasks
  travel, 185
  wayfinding, 231
Extensible 3D (X3D) standard, 24
External speakers, 65–66, 80
Eye tracking
  devices, 105–106
  recommended reading, 134

F
Fatigue issues, 330
Feedback
  dimensions, 316–317
  overview, 315–316
  passive haptic, 320–322
  Pinch Keyboard and, 297
  recommended reading, 347
  system control and, 285
Feedback compliance, 317–319


Feedback displacement, 317
Feedback substitution, 67, 316, 319–320
Field of regard (FOR), 31–32, 44
Field of view. See FOV (field of view)
Filenames, symbolic input, 289
FingerSleeve, 113–114
Fishtank virtual reality, 42
Fitts’s Law, 132, 134, 350, 368
Fixed object manipulation, travel, 215–216
Flashlight, pointing techniques, 153–154
Flex and Pinch input system, 109–110
Fly Mouse, 99, 111
Flying carpet technique, 342
FOR (field of regard), 31–32, 44
Force control, 145
Force-feedback steering wheels, 72
Force-reflecting joysticks, 72, 220
Form-factors, 145–146
Formative evaluation, 22, 355–356
FOV (field of view)
  surround-screens and, 44
  visual displays and, 31–32
  wayfinding support, 235
Future of 3D UIs, 407–427
  design and development issues, 412–415
  interaction techniques, 410–412
  killer apps, 417–418
  quantifying benefits of, 416–417
  standards, 417
  technology issues, 407–409
  usability evaluation issues, 415–416

G
GAITER system, 195
GaitMaster, 197
Gaze-directed steering, 200–201
Gaze direction, 106
Geometrical coherence, 322
Gestural commands
  design and implementation, 272–273
  input with, 272
  overview of, 270
  techniques, 271–272
Gestures and datagloves, 106–107
Gesture-based symbolic input, 303–304
Global landmarks, 249
Global positioning systems. See GPS (global positioning systems)
Glove-based recognition, gestures, 272
GloveTalk systems, 303
Go-Go technique, 160–162
GPS (global positioning systems)
  AR systems and, 392
  hybrid trackers and, 105
  tracking and, 194–195

Grabbing the air, travel, 214–215
Graffiti, 300
Graphical menus
  1-DOF menus, 261–262
  3D widgets, 263–265
  adapted 2D menus, 260–261
  overview of, 260
  placement, 265–266
  practical applications of, 267–268
  representation and structure of, 266–267
  selection of, 266
  TULIP menus, 262–263
Graphical user interfaces (GUIs), 11–12
Graphics
  3D UIs drive research in, 24
  3D interactive, 16
  standards, 24
Grasp, 145–146, 181
Ground-referenced haptic devices, 72–73, 81
Guiard’s principles, two-handed interaction, 324–325
Guided navigation technique, 206–207
Guidelines. See Design guidelines
Guidelines-based expert evaluation, 355
GUIs (graphical user interfaces), 11–12


H
Hand-centered object manipulation extending ray casting (HOMER), 164–165
Hand-force-feedback displays, 73–74
Handheld 3D mice, 111–113
Handheld widgets, 262
Hands-free 3D interface, 410
Haptic cues
  human motor subsystem, 69–70
  kinesthetic, 69
  overview of, 68
  tactile, 68–69
Haptic displays, 68–77
  3D applications, 77
  body-referenced, 73–74
  combination devices, 75
  ergonomics, 71
  ground-referenced, 72–73
  guidelines for choosing, 81
  haptic cues, 68
  haptic presentation capability, 70
  human motor subsystem and, 69–70
  kinesthetic cues, 69
  overview of, 68
  passive devices, 75–76
  recommended reading, 85
  resolution, 71
  tactile cues, 68–69
  tactile devices, 74–75
  types of, 71–72
Haptic perception, 69
Haptic presentation capability, 70
Haptic rendering software, 68
Hardware. See Output devices, Input devices
HCI (human-computer interaction)
  3D interaction and, 5
  applying to VR systems, 13
  defined, 6
  impact of 3D UIs on, 25
  principles, 15–16
Head-crusher variation, image-plane techniques, 157–158
Head-mounted displays. See HMDs (head-mounted displays)
Head-mounted projective display (HMPD), 50
Head-referenced menus, 266
Head-related transfer functions (HRTFs), 61, 63–64
Head tracking, viewpoint orientation, 217–218
Headphones, 64–65, 80
Health, sanitation issues with public systems, 330
Help, 3D sound applications, 67
Hemispherical displays, 47–49, 79
Heritage sites, 3D applications for, 9
Heuristic evaluation
  of 3D UIs, 415
  as evaluation method, 355
  selecting heuristics for, 365
HiBall tracking system, 193–194
HMDs (head-mounted displays), 49–52
  advantages of, 50–51
  AR displays and, 390
  arm-mounted compared with, 53
  disadvantages, 51–52
  guidelines for choosing, 79
  keyboards and, 91
  overview of, 49
  recommended reading, 84
  types of, 50
HMPD (head-mounted projective display), 50
Holographic displays, 58
Homemade input devices
  connecting to computers, 124–126
  strategies for building, 122–124
HOMER (hand-centered object manipulation extending ray casting), 164–165
Homogeneous coordinates, 427
HRTFs (head-related transfer functions), 61, 63–64
Human-based design
  constraints, 322–323
  feedback compliance, 317–319
  feedback dimensions, 316–317
  feedback generally, 315–316
  feedback substitution, 319–320
  overview of, 314–315
  passive haptic feedback, 320–322
  two-handed interaction, 323–327
  user groups, 327–330
Human-computer interaction. See HCI (human-computer interaction)
Human factors
  3D UI design, 314, 315
  system control, 257, 285
Human motor subsystem, 69–70
Hybrid haptic devices, 75, 81
Hybrid tracking devices, 103–105

I
IHL (inside-the-head localization), 65, 66
IID (interaural intensity difference), 60–61
The Image of the City (Lynch), 239
Image-plane, pointing techniques, 156–158
Immersion
  importance of, 4
  quantifying the benefits of, 416–417
  sensory, 408
  with HMDs, 50
Inertial tracking devices, 100–101
Information-rich virtual environment (IRVE), 414
Input conditions, travel, 190
Input devices, 87–133
  3D, 17
  3D mice, 110–114
  communication with, 28
  defined, 6
  desktop. See Desktop devices
  direct human input, 118–121
  evaluating, 132
  guidelines for choosing, 126–128
  home-made, 122–126
  modalities for virtual and physical objects, 395
  overview of, 87–90

  recommended reading, 133–134
  selection and manipulation, 143–144
  special-purpose, 114–118
  system control and, 257–258
  taxonomies, 128–132
  tracking. See Tracking devices
  user comfort and, 329

Inside-out approach, wide-area tracking, 193

Inside-the-head localization (IHL), 65, 66
Instantaneous gestures, gesture-based symbolic input, 303–304
Instrumental feedback, 317
Integrated camera controls, 220–222
Integration, of control, 144–145
Intelligent constraints, 323
Interaction aids, 20
Interaction seams, 395
Interaction Slippers, 115–116
Interaction style, 256
Interaction techniques
  3D interaction, 7, 19
  defined, 6–7
  feedback and, 316
  future of 3D UIs and, 410–412
  HCI and 3D interaction, 5
  integration for multiple tasks, 411–412
  list of, 19–20
  mapping to applications, tasks, and devices, 409
  modeling and, 137
  navigation (travel). See Travel
  navigation (wayfinding). See Wayfinding
  optimizing for specific domains, 412
  overview of, 135–137
  seamless 3D UIs, 413
  selection and manipulation. See Selection and manipulation
  symbolic input. See Symbolic input
  system control. See System control
  usability evaluation, 22
Interaural intensity difference (IID), 60–61
Interaural time difference (ITD), 60–61


Interviews, as evaluation method, 357
Inventing 3D UIs
  2D UI adaptations, 336–340
  magic and aesthetics, 340–344
  overview of, 314–315, 331
  real-world metaphors, 333–335
  simulating reality, 331–333
IRVE (information-rich virtual environment), 414
Isometric joystick, 94–95
Isomorphism
  in 3D design, 331
  3D manipulation and, 147–148
  simple virtual hand and, 160
Isotonic joystick, 94
ITD (interaural time difference), 60–61
Iterative development, 311, 350

J
Joysticks
  as desktop input device, 93–95
  force-reflecting, 72
  isometric, 94–95
  two-joystick navigation, 222
  velocity control, 220

K
Keyboards, 294–299
  chord keyboard, 296–297
  comparing techniques, 307–308
  as desktop input device, 91
  low key-count keyboards, 295
  miniature keyboards, 294–295
  overview of, 294
  Pinch Keyboard, 297–298
  QWERTY layout, 290–291
  soft keyboards, 298–299

Kinesthetic system
  cues, 69
  haptic perception and, 69

L
Labels, symbolic input, 289
Landmark knowledge, as spatial knowledge, 232
Landmarks
  artificial landmarks as wayfinding aid, 248
  design guidelines, 249
  global and local, 249
  legibility techniques, 240

Latency, temporal incompliance, 318–319
Law of steering, 134
Layout sketches, 251
LCD (liquid crystal displays)
  HMDs use of, 50
  refresh rates, 40
Learning curves, keyboards, 306
Legibility techniques, 239–241
Lego Interface Toolkit, 124
Lenticular displays, 56
Light-scanning displays. See VRDs (virtual retinal displays)
Light transfer, visual displays, 33
Lighting, in visual depth perception, 36
Linear perspective, visual depth perception, 35–36
Liquid crystal displays (LCD)
  HMDs use of, 50
  refresh rates, 40

Local landmarks, 249
Localization, sound
  3D applications facilitating, 67
  binaural cues, 60
  defined, 59
  head-related transfer functions, 61
  reverberation cues, 62
  sound intensity cues, 62
  spectral cues, 61
  vision and environmental familiarity cues, 62
Locator devices, 128
Locomotion devices. See also Physical locomotion
  cycles, 198–199
  GaitMaster, 197
  omni-directional treadmill, 196–197
  recommended reading, 226
  Uniport, 198–199
  walking in place, 197–198

Low key-count keyboards, 295

M
Magic (nonisomorphism)
  in 3D design, 331
  interaction techniques, 340–342

Magnetic tracking devices, 97–98
Maneuvering tasks
  travel, 186–187
  wayfinding, 231
Manipulation. See Selection and manipulation
Manual manipulation travel techniques, 214–216
  fixed object manipulation, 215–216
  grabbing the air, 214–215
Maps, 242–246
  examples, 246
  legibility of, 244
  mental notations and, 244
  orientation of, 243–244
  overview of, 242
  scale of, 243
  size of, 245
  target specification, 211–213
  you-are-here (YAH), 243

Markup, symbolic input, 290, 294
Mathematics of 3D UIs, 419–427
  matrices, 422–423
  points, 421–422
  quaternions, 424–426
  recommended reading, 426–427
  scalars, 420
  vectors, 420–421
Matrices, 422–423, 426–427

MAUVE (Multi-Attribute Usability evaluation tool for Virtual Environments), 366

Mechanical tracking devices, 98
Medicine, 3D environments used for, 23
MEMS (microelectronic mechanical systems), 100
MetaDesk system, 395
Metaphors
  classification of manipulation techniques by, 148
  classification of travel techniques by, 191–192
  real-world metaphors as basis of 3D UI design, 333–335
Metrics
  system performance, 357
  task performance, 358
  testbed approach, 372
  user preference, 358–360

Metropolis keyboard, 299
Mice
  2D, 91–92
  3D. See 3D mice

Microelectronic mechanical systems (MEMS), 100

Microphones, 119
MIDI (Musical Instrument Digital Interface), 125–126
Migration, between platforms, 413–414
Mimic gestures, 272
Miniature keyboards, 294–295
Modeling tools, 21
Monitoring input devices, 89
Monitors, 40–42
  guidelines for choosing, 78
  overview of, 40–41
  stereo glasses used in conjunction with, 41–42
Monocular (static) cues, visual depth, 34–36, 39
Motion cues, wayfinding support, 235–236


Motion parallax cues, visual depth, 37–38
Motion trackers, 96–105
  acoustic tracking, 98–99
  hybrid tracking, 103–105
  inertial tracking, 100–101
  magnetic tracking, 97–98
  mechanical tracking, 98
  optical tracking, 101–103
  overview of, 96–97
  recommended reading, 133

Motor abilities, as basis of user groups, 328
MR (mixed reality). See also AR (augmented reality)
  application evaluation, 415–416
  defined, 7–8
  relationship of AR and VEs in, 388
  seamless interaction with real and virtual world, 411
Multi-Attribute Usability evaluation tool for Virtual Environments (MAUVE), 366
Multimethod evaluation approaches
  comparing, 378–382
  overview of, 369
  sequential approach, 375–378
  testbed approach, 370–374
Multimodal interaction, 20
  advantages of, 278–279
  combining techniques for, 280
  put-that-there technique, 279
Multisensory output, wayfinding, 236–237
Musical Instrument Digital Interface (MIDI), 125–126

N
Naive search tasks, travel, 186
Naturalism
  in 3D design, 331
  in real-world design, 241
NaviCam system, 392
Navigation (travel). See Travel
Navigation (wayfinding). See Wayfinding
Nodes, legibility techniques, 240

Nonisomorphic interaction
  3D manipulation and, 148
  object rotation and, 168
  viewpoint rotation and, 218–219
Nonisomorphism (magic)
  in 3D UI design, 331
  interaction techniques, 340–342
Nulling compliance, 171, 318
Numeric input
  gesture-based symbolic input, 303
  symbolic input, 310

O
Object manipulation. See also Selection and manipulation
  bimanual interaction and, 326
  symbolic input and, 289
Occlusion
  map size and, 245
  in vision-based trackers, 103
  in visual depth perception, 35
Oculomotor cues, visual depth, 37, 39
ODT (omnidirectional treadmill), 196
Olfactory cues, wayfinding, 250
Omnidirectional treadmill (ODT), 196
Operational feedback, 317
Optical see-through displays, 390
Optical tracking devices, 101–103, 134
Orbital viewing, travel viewpoint orientation, 218
Orientation. See also Viewpoint orientation, travel
  of maps, 243–244
  spatial orientation, 230
Osmose, 343–344
Output devices, 29–86
  AR systems, 390
  auditory displays. See Auditory displays
  communication with, 28
  defined, 6
  guidelines for choosing, 77–83
  haptic displays. See Haptic displays
  overview of, 29–30
  recommended reading, 84–86
  user comfort and, 329
  visual display. See Visual displays

Outside-in approach, wide-area tracking, 193

P
Parameters, symbolic input, 289
Passive haptic devices, 75–76, 81
Passive haptic feedback. See Props (passive haptic feedback)
Passive input devices, 89
Passive travel techniques, 189
Path analysis, 251
Paths, legibility techniques, 240
Paths, travel
  curvature of, 187
  drawing a path in route planning, 207–208
  marking points along a path in route planning, 208–209
PDAs (personal digital assistants), 92
Pen-and-tablet technique, 258, 276, 337–339, 411
Pen-based symbolic input, 300–302

  overview of, 300
  pen-stroke gesture recognition, 300–302
  unrecognized pen input (Digital Ink), 302
Pen-based tablets
  as desktop input device, 92–93
Pen-stroke gesture recognition, 300–302
Perception

  3D UIs and, 15
  auditory, 59–62
  haptic, 68–70
  perceptual ability as basis of user groups, 328
  recommended reading, 84, 253–254
  visual, 34–40

Perceptual information, 414

Performance metrics. See Metrics
Performance models
  evaluation based on, 365
  evaluation leads to, 350

Personal digital assistants (PDAs), 92
Phicons (physical icons), 395, 397
Physical characteristics, as basis of user groups, 328
Physical environment, evaluation issues, 360–362
Physical-force-based devices, 220
Physical icons (phicons), 395, 397
Physical locomotion, 192–199
  cycles, 198–199
  simulated walking, 196–198
  walking, 192–195
  walking in place, 195
Physical props. See Props (passive haptic feedback)
Physical reality constraints, 322
Physical travel techniques
  classification of travel techniques, 189
Pick devices, 128
Pinch Gloves, 108–109, 297–298, 304
Pinch Keyboard, 297–298, 304
Placement

  of graphical menus, 265–266
  of system control, 284

Platforms, interoperability between, 413–414

Pointing-based steering, 201–202
Pointing techniques, 150–158
  aperture, 153–156
  flashlight, 153–154
  image-plane, 156–158
  overview of, 150–151
  ray-casting, 151–153
  two-handed pointing, 153

Points, mathematics, 421–422
Polarization multiplexing, 41
Polyshop, 271, 327
Position control vs. force control, 145
Positioning, as manipulation task, 142


Postures, 106–108, 270. See also Gestural commands
Precision grasp, 145–146
Presence
  evaluators interrupting, 362
  importance of, 4
  metrics for, 359
  recommended reading, 254
  usability evaluation and, 22
  user-centered wayfinding support, 237
  wayfinding and, 229
Primed search tasks, travel, 186
Procedural (or route) knowledge, 232
Props (passive haptic feedback), 320–322
  as design technique, 321
  disadvantages, 321–322
  manipulation performance and, 146
  overview of, 320
  physical props compared with virtual tools, 274
  steering techniques, 204
  transparent props for 2D interaction in VE, 339–340
Prototypes
  3D UI applications, 8
  design and development, 311
  evaluation, 354
ProVIT case study, 282–285
Psychiatry
  3D environments for, 23
  applications for, 8
Psychology, 25
Public systems, 330
Push-to-talk schemes, 119

Q
Quaternions, 168–169, 177–179, 424–426, 427
Questionnaires, as evaluation method, 356
QWERTY layout
  physical keyboards, 290–291
  virtual keyboard, 299

R
Radio frequency identification (RFID), 275
Ray-casting
  pointing technique, 151–153
  ZoomBack technique, 213

Reactive feedback, 317
Real world
  AR brings virtual interfaces into, 390–391
  compliance, 277
  design principles, 241–242
  metaphors for 3D design, 333–335
  recommended reading, 347
  relevance of 3D interaction, 4–5
  seamless interaction with virtual world, 411
Recommended reading
  3D UI design, 347
  AR (augmented reality), 406
  evaluation, 384
  input devices, 133–134
  mathematics of 3D UIs, 426–427
  output devices, 84–86
  selection and manipulation, 181–182
  symbolic input, 310
  system control, 285
  travel, 226
  wayfinding, 253–254

Reference objects, wayfinding, 248
Refresh rate
  monitors, 40
  visual displays, 33
Registration, AR systems, 390
Representation, system control, 266–267
Requirements gathering, for design and development, 311
Resolution
  haptic displays, 71
  spatial resolution of visual display, 32
Responsive Workbench, 46–47
Reverberation cues, 3D sound localization, 62
RFID (radio frequency identification), 275
Ring menu, 261
Ring Mouse, 113


Rotation
  bimanual interaction and, 326
  as manipulation task, 142
Rotation interaction techniques, 168–171
  absolute and relative mappings, 170
  ARCBALL, 177–179
  designing rotation mappings, 169
  overview of, 168–169
  usability properties of, 170–171
Rotational mappings, 169, 182
Route-planning
  drawing a path, 207–208
  manipulating a user representation, 209–210
  marking points along a path, 208–209
  overview of, 206–207

Route (procedural) knowledge, 232

S
Sanitation issues, public systems, 330
Scalars, mathematics, 420
Scale, of maps, 243
Scaled-world grab, 165–166, 216–217
Scaling, bimanual interaction, 327
Scenarios
  as evaluation tool, 352
  of symbolic input, 288–290

Scientific visualization, 8–9
Screen shape (display geometry), 33
Search tasks
  strategies for, 237–238
  travel task, 185–186
  wayfinding task, 231
Selection and manipulation, 139–182
  3D manipulation tasks, 140–141
  3D widgets, 174–175
  application-specific manipulation tasks, 143
  ARCBALL technique, 177–179
  canonical manipulation tasks, 141–143
  classification of manipulation techniques, 147–150
  combining techniques for, 163–164
  control dimensions and integrated control, 144–145
  design guidelines, 179–181
  desktop manipulation techniques, 171–174
  device placement and form-factor, 145–146
  fishing-reel technique, 158
  flashlight and aperture techniques, 153–156
  force vs. position control, 145
  Go-Go technique, 160–162
  of graphical menus, 266
  HOMER, 164–165
  image-plane techniques, 156–158
  input devices and, 143–144
  overview of, 139–140
  pointing techniques, 150–151
  ray-casting techniques, 151–153
  recommended reading, 181–182
  rotation interaction techniques, 168–171
  scaled-world grab, 165–166
  simple virtual hand, 159–160
  two-handed pointing, 153
  used for travel, 211
  virtual hand techniques, 158–159
  Virtual Sphere techniques, 175–177
  Voodoo Dolls, 166–168
  World-in-Miniature (WIM), 162–163

Semiautomated steering, 206
Senseboard, 299
Senses
  feedback dimensions, 316
  sensory immersion, 408
  sensory substitution, 67, 316, 319–320
Sequential approach, 375–378
  benefits of, 381
  case studies, 377–378
  comparing with testbed, 378–382
  cost benefit analysis with, 376–377
  costs of, 380–381
  evolution of, 375
  goals of, 378–379
  results of, 381–382
  serial nature of, 376
  steps in, 375–376
  when to use, 379–380

Shadows
  for object manipulation, 333
  in visual depth perception, 36

ShapeTape, 114–115
Sign language, 272, 303
Signs, wayfinding aids, 247–248
Simple virtual hand, 159–160
Simulated walking, 196–198
Simulation
  3D environments used for, 22–23
  3D UI design based on, 331–333
  simulated walking, 196–198
  simulator sickness, 359–360
  simulator systems, 18

Single-character, speech recognition, 305
Situation awareness, 230
SKETCH, 273, 340
Soft keyboards, 298–299
Software
  haptic rendering, 68
  speech recognition, 268
  tools, 21, 68

Sonification, 67
Sound. See 3D sound
Sound intensity cues, 3D sound localization, 62
Sound localization. See Localization, sound
Space-multiplexed interactions, 400
Spatial compliance, 317–318
Spatial knowledge
  transferring to real world, 228
  types of, 231–232

Spatial orientation, 230
Spatial perception, of humans, 15
Spatial resolution
  haptic displays, 71
  visual displays, 32

Spatial seams, augmented surfaces, 397

Speakers, external, 65–66, 80
Special-purpose input devices, 114–118
  CavePainting Table, 116–117
  Control Action Table (CAT), 117–118
  Interaction Slippers, 115–116
  overview of, 114
  ShapeTape, 114–115
Specified trajectory movement, wayfinding, 231
Spectral cues, 3D sound localization, 61
Spectral multiplexing, 41
Speech-connected gestures, 271
Speech input devices, 119–120
Speech recognition
  design and implementation, 268–269
  engine, 268
  practical application, 269–270
  single-character, 305
  speech-based symbolic input, 304–305
  unrecognized speech input, 306
  whole-word, 305
Spotlight, pointing techniques, 153–156
Standards, 3D UI, 24, 417
Static (monocular), visual depth cues, 34–36, 39
Statistical variability, 366–367
Steering techniques, 199–206
  camera-in-hand steering, 203–204
  gaze-directed steering, 200–201
  overview of, 199–200
  physical steering props, 204
  pointing-based steering, 201–202
  semiautomated steering, 206
  torso-directed steering, 202–203
  virtual motion controller, 205–206
Stereo glasses, 41–42
Stereopsis, visual depth cue, 38
Stereoscopic viewing, 41
Sticky-finger technique, 157
String devices, 128
Stroke devices, 128
Stroke, in pen-stroke gesture recognition, 300


Structure, system control, 266–267
Subjects, 351. See also Users
Summative evaluation, 22, 356
Surface-based recognition, 272
Surround-screen displays, 43–46
  active viewpoints, 45–46
  advantages, 44
  disadvantages, 45
  guidelines for choosing, 78
  overview of, 43
  types of, 43–44
Survey knowledge, 232
Sutherland, Ivan, 12
Swept volume techniques, 57
Symbolic gestures, 272
Symbolic input, 287–312
  for 3D UIs, 291–292
  beyond text and number entry, 310
  design guidelines, 306–310
  gesture-based, 303–304
  history of, 290–291
  importance of, 288
  keyboard-based, 294–299
  need for development in, 410–411
  overview of, 137, 287–288
  pen-based, 300–302
  recommended reading, 310
  scenarios for, 288–290
  speech-based, 304–306
  tasks, 293–294
  techniques, 294

Symbols, alphanumeric, 293
Symmetric bimanual tasks, 324
Symmetric bimanual techniques, 327
System control, 255–285
  1-DOF menus, 261–262
  3D widgets, 263–265
  adapted 2D menus, 260–261
  classification of techniques, 259
  defined, 256
  design and implementation, 265
  design guidelines, 280–282
  gestural commands, 270–273
  graphical menus, 260
  human factors, 257, 285
  input devices, 257–258
  multimodal techniques, 278–280
  overview of, 137, 255–257
  placement, 265–266
  practical applications of, 267–268
  ProVIT case study, 282–285
  recommended reading, 285
  representation and structure, 266–267
  selection, 266
  system- and application-level factors, 258
  tools, 274–278
  TULIP menus, 262–263
  voice commands, 268–270

System-level factors, 258
System performance, 357
Systems, translations between users and, 27

T
Tactile augmentation, 321
Tactile maps, wayfinding, 236–237
Tactile system
  cues, 68–69
  guidelines for choosing tactile devices, 81
  haptic devices, 74–75
  haptic perception and, 69

Tangible AR
  advantages/disadvantages, 402–403
  design of, 398–400
  overview of, 397–398
  time-multiplexed interaction in, 400–402
Tangible interaction, 397
Tangible user interfaces (TUIs), 275–276, 397
Target-based travel
  map-based or WIM-based target specification, 211–213
  overview of, 210–211
  visibility of target, 187–188
  ZoomBack technique, 213

Task analysis, 352


Tasks
  classification of manipulation techniques by, 148
  composite/universal, 19
  context, 164
  integrated interaction techniques for multiple, 411–412
  mapping to applications, devices, and techniques, 409
  manipulation, 140–143
  parameters, 142–143
  performance metrics, 358
  symbolic input, 293–294
  travel, 185–187
  unimanual vs. bimanual, 324
  wayfinding, 231

Taxonomies. See also Classification of techniques
  building in testbed approach, 370–372
  as evaluation tool, 352–354
  input device, 128–132
  technique decomposition, 353

Technique components, 353
Technique integration, 164, 411
Technologies
  background for 3D UIs, 16–17
  future of 3D UIs and, 407–409
  supporting 3D UIs, 5

Teddy, 273
Telepresence systems, 18
Temporal compliance, 317–318
Temporal resolution, 71
Testbed approach, 370–374
  benefits of, 381
  case studies, 374
  comparing with sequential approach, 378–382
  costs of, 380–381
  goals of, 378–379
  initial evaluation, 370
  outside factors, 372
  overview of, 370
  performance metrics, 372
  results analysis, 373–374
  results, applying, 381–382
  taxonomy, 370–372
  testbed experiment, 373
  when to use, 379–380

Text entry, symbolic input, 310
Texture gradient, in visual depth perception, 36
Texture, in real-world design, 241–242
Think aloud protocol, 361
Three-Up, Labels in Palm (TULIP) menu, 262–263
Thumbscript, 304
Tiles system, 399–400
Time-multiplexed interactions, 400–402
Time-to-target tests, 251
Tools, evaluation, 352–354
Tools, system control, 274–278
  design and implementation, 276–277
  overview of, 274
  practical application, 278
  techniques, 275–276

Torso-directed steering, 202–203
Touch. See Haptic displays
Tourism, 3D UI applications for, 9
Trackballs, 91–92
Tracking
  AR systems and, 390
  GPS, 194–195
  head tracking, viewpoint orientation, 217–218
  integrated control for manipulation, 144–145
  wide-area, 193

Tracking devices, 96–109
  acoustic tracking, 98–99
  bend-sensing gloves, 106–108
  data gloves, 106
  eye tracking, 105–106
  hybrid tracking, 103–105
  inertial tracking, 100–101
  magnetic tracking, 97–98
  mechanical tracking, 98
  motion trackers, 96–97
  optical tracking, 101–103
  overview of, 96
  Pinch Gloves, 108–109

Trails, wayfinding aids, 249
Training
  3D environments used for, 22–23
  transfer, 254

Transaural audio, 66
Transfer functions, 27, 159
Translations, between user and system, 27
Travel, 183–226
  camera-in-hand steering, 203–204
  classification of techniques, 188–192
  cycles, 198–199
  design guidelines, 222–226
  drawing a path, 207–208
  exploration, 185
  gaze-directed steering, 200–201
  integrated camera controls, 220–222
  maneuvering, 186–187
  manipulating a user representation, 209–210
  manual manipulation techniques, 214–216
  marking points along a path, 208–209
  motor component of navigation, 183
  overview of, 136, 183–184
  physical locomotion, 192
  physical steering props, 204
  pointing-based steering, 201–202
  recommended reading, 226
  relationship to wayfinding, 136–137
  route-planning, 206–207
  scale issues, 410
  search tasks, 185–186
  semiautomated steering, 206
  simulated walking, 196–198
  steering techniques, 199–200
  target-based, 210–213
  task characteristics, 187–188
  tasks, 184–185
  torso-directed steering, 202–203
  travel-by-scaling, 216–217
  velocity specification, 219–220
  viewpoint orientation techniques, 217–219
  virtual motion controller, 205–206
  walking, 192–195
  walking in place, 195

Travel-by-scaling techniques, 216–217
Treadmill, for simulated walking, 196
Triggers, 150
TUIs (tangible user interfaces), 275–276, 397
TULIP (Three-Up, Labels in Palm) menus, 262–263
Two-handed interaction, 323–327
  asymmetric techniques, 325–327
  grabbing the air, 215
  Guiard’s framework, 324–325
  in list of interaction techniques, 20
  overview of, 323–324
  pointing techniques, 153, 202
  symmetric techniques, 327
  tangible AR, 402
  Voodoo Dolls, 166–168

U
UbiComp (ubiquitous computing), 8
UIs (user interfaces)
  3D. See 3D user interfaces
  AR, 394–395, 404–405
  components, 3–4
  defined, 6
  design, 3
  design principles, 15–16
  graphical, 11–12
  tangible, 275–276, 397
UniCam, 221
Unimanual tasks, 324
Uniport, 198–199
Unrecognized pen input (digital ink), 302, 309
Unrecognized speech input, 306
Usability, 7, 351
Usability engineering, 311, 384


Usability evaluation, 349–384
  3D UIs, 21–22
  approaches, 351, 378–382
  classification of techniques, 367–369
  defined, 7
  evaluation type issues, 365–367
  evaluator issues, 362–363
  formative vs. summative, 22
  future of 3D UIs, 415–416
  guidelines, 382–384
  heuristics for, 415
  methods, 351, 354–357
  metrics, 357–360
  miscellaneous issues, 367
  MR applications, 415–416
  multimethod approaches, 369
  overview of, 349–350
  physical environment issues, 360–362
  purposes of, 350
  recommended reading, 384
  sequential approach, 375–378
  terminology for, 351
  testbed approach, 370–374
  tools, 352–354
  user issues, 363–365
User-centered wayfinding support, 234–235
  field of view (FOV), 235
  motion cues, 235–236
  multisensory output, 236–237
  presence, 237
  search strategies, 237–238
User comfort
  3D UI design and, 328–330
  metrics for, 359
User groups, 327–328
  based on age, 327
  based on perceptual, cognitive, and motor ability, 328
  based on physical characteristic, 328
  based on prior experience, 327–328
User interfaces. See UIs (user interfaces)
User task analysis, 352
User-worn 3D mice, 113–114
Users
  collaboration between multiple, 414–415
  defined, 351
  evaluation issues related to, 363–365
  metrics for user preferences, 358–360
  sensory immersion, 408
  translations between systems and, 27

V
Variables, in evaluation, 366–367
Vectors, 420–421, 426
Velocity/acceleration
  travel, 190
Velocity specification, travel, 219–220
VEs (virtual environments). See also VR (virtual reality)
  defined, 7
  evaluation issues, 360–362
  information-rich virtual environment (IRVE), 414
  MAUVE (Multi-Attribute Usability evaluation tool for Virtual Environments), 366

ViaVoice (IBM), 268
Video see-through displays, 390
Viewpoint orientation, travel
  head tracking, 217–218
  nonisomorphic rotation, 218–219
  orbital viewing, 218
  Virtual Sphere, 219

Virtual Annotation system, 306
Virtual body, 237
Virtual environments. See VEs (virtual environments)
Virtual hand techniques, 158–162
  Go-Go, 160–162
  overview of, 158–159
  simple virtual, 159–160
  World-in-Miniature (WIM), 162–163
Virtual keyboards. See Soft keyboards
Virtual menus, 326
Virtual motion controller (VMC), 205–206


Virtual Notepad system, 302, 338Virtual reality. See VR (virtual reality)Virtual retinal displays (VRDs), 54–55, 79Virtual Sphere, 175–177, 219Virtual tools, 275Virtual travel

classification of travel techniques, 189wayfinding aids, 228

Virtual Tricorder, 276, 342
Virtual worlds, compliance with real world, 277
Vision cues, 3D sound localization, 62
Visual depth. See Depth cues, visual displays
Visual displays, 31–59
    arm-mounted, 52–54
    autostereoscopic, 56–59
    binocular disparity and stereopsis and, 38
    depth cue relevance, 38–40
    depth cues, 34
    display geometry, 33
    ergonomics, 33–34
    field of regard and field of view, 31–32
    head-mounted, 49–52
    hemispherical, 47–49
    light transfer, 33
    monitors, 40–42
    monocular (static) depth cues, 34–36
    motion parallax depth cues, 37–38
    oculomotor depth cues, 37
    overview of, 31
    refresh rate, 33
    spatial resolution, 32
    surround-screen, 43–46
    types of, 40
    virtual retinal, 54–55
    workbenches, 46–47

Visualization, 17, 23
VMC (virtual motion controller), 205–206
Voice commands
    design and implementation, 268–269
    overview of, 268
    practical application, 269–270

Volumetric displays, 57–58, 409

Voodoo Dolls, 166–168
VR (virtual reality)
    AR as alternative to, 387–388
    defined, 7
    history of 3D UI and, 12–13
    seamless interaction with real world, 411
    surround-screens and, 43
    technological background for 3D UIs, 18

VRDs (virtual retinal displays), 54–55, 79
VRML specification, 24

W

W3C (World Wide Web Consortium), 24
Walking, 192–195
Walking in place, 195
Wayfinding, 227–254
    artificial cues, 242
    artificial landmarks, 248–249
    audio and olfactory cues, 250
    compasses, 246–247
    design guidelines, 251–253
    egocentric and exocentric reference frames, 232–234
    environment-centered support, 239
    environment design, 239
    evaluating wayfinding aids, 250–251
    field of view (FOV), 235
    legibility techniques, 239–241
    maps, 242–246
    motion cues, 235–236
    multisensory output, 236–237
    overview of, 136, 227–229
    presence and, 237
    real-world design principles, 241–242
    recommended reading, 253–254
    reference objects, 248
    relationship to travel, 136–137
    search strategies, 237–238
    signs, 247–248
    spatial knowledge and, 231–232
    tasks, 231
    theoretical foundations of, 229–230
    trails, 249
    user-centered support, 234–235

Whole-word speech recognition, 305

Wide-area tracking, 193
Widgets
    3D, 19, 174–175, 263–265
    handheld, 262

WIM (World-in-Miniature)
    as an AR technique, 392–393
    as a manipulation technique, 162–163
    as a route-planning technique, 209–210
    as a target-specification technique, 211–213

WIMP (windows, icons, menus, and pointers), 91

Windows, icons, menus, and pointers (WIMP), 91

Wizard of Oz approach (WOZ), 354

Word-level techniques, symbolic input, 300
Workbenches, 46–47, 78
World-in-Miniature. See WIM (World-in-Miniature)
World-referenced menus, 266
World Wide Web Consortium (W3C), 24
WOZ (Wizard of Oz) approach, 354

X

X3D (Extensible 3D) standard, 24

Y

You-are-here (YAH) maps, 243

Z

ZoomBack technique, 213
