
Jannick P. Rolland
Optical Diagnostics and Applications (ODA) Lab
University of Central Florida
College of Optics and Photonics, Institute of Simulation and Training, and School of Computer Science
University of Central Florida
4000 Central Florida Boulevard
Orlando, FL 32816
rolland@odalab.ucf.edu

Frank Biocca
Media Interface and Network Design (MIND) Labs
Michigan State University
East Lansing, MI 48824

Felix Hamza-Lup
Optical Diagnostics and Applications (ODA) Lab
School of Computer Science
University of Central Florida
4000 Central Florida Boulevard
Orlando, FL 32816

Yanggang Ha
Ricardo Martins
Optical Diagnostics and Applications (ODA) Lab
College of Optics and Photonics
University of Central Florida
4000 Central Florida Boulevard
Orlando, FL 32816

Presence, Vol. 14, No. 5, October 2005, 528–549

© 2005 by the Massachusetts Institute of Technology

Development of Head-Mounted Projection Displays for Distributed, Collaborative, Augmented Reality Applications

Abstract

Distributed systems technologies supporting 3D visualization and social collaboration will be increasing in frequency and type over time. An emerging type of head-mounted display referred to as the head-mounted projection display (HMPD) was recently developed that requires only ultralight optics (i.e., less than 8 g per eye) and enables immersive multiuser, mobile augmented reality 3D visualization, as well as remote 3D collaborations. In this paper a review of the development of lightweight HMPD technology is provided, together with insight into what makes this technology timely and so unique. Two novel emerging HMPD-based technologies are then described: a teleportal HMPD (T-HMPD) enabling face-to-face communication and visualization of shared 3D virtual objects, and a mobile HMPD (M-HMPD) designed for outdoor wearable visualization and communication. Finally, the use of the HMPD in medical visualization and training, as well as in infospaces, two applications developed in the ODA and MIND labs respectively, is discussed.

1 Introduction

With the increase in multidisciplinary workforce and the globalization of information exchange, multiuser work teams are often distributed. Organizations increasingly seek ways to effectively support these teams by allowing them to share and mutually interact with 3D data in a common distributed workspace. As a result, there has been increased interest in and reliance upon the use of distributed systems technologies that support multiuser distributed 3D visualization. Furthermore, economic, political, and health concerns may also fuel increased reliance on distributed work environments.

The design of mobile and distributed augmented reality (AR) systems is best driven by concrete real-world applications testable in real-world environments. In this paper we review a research and development program to create and test AR displays and interface designs that support local, distributed, and mobile teams that (1) require immersive interaction with large-scale 3D visualizations, but also (2) have full sensory awareness of the physical environment around them, and (3) are able to see and interact face-to-face with local and remote participants. Example applications include collaborative and distributed science and engineering design via 3D visualization, 3D medical visualization and


training, effective face-to-face distributed business negotiation and conferencing, distance education, and mediated governmental services. The displays and interfaces are designed to support mobile AR navigation, telepresence, and advanced information systems, which we refer to as mobile infospaces, for applications such as medical and disaster relief scenarios.

The AR systems introduced below aim to create a compelling sense of fully embodied interaction with spaces and people that are not immediately present in the same physical environment. The acceptance and efficiency of distributed collaborative systems may require advanced media technologies that can create a strong sense of social presence, defined as the sense of "being with others" (Short, Williams, & Christie, 1976; Biocca, Harms, & Burgoon, 2003). Participants may need to feel that people who are not in the same room or place have a similar impact on team processes as those who are physically present. To create this effect, our research efforts include the development of technologies that enable both 3D dataset visualization (Hamza-Lup, Davis, & Rolland, 2002) together with 3D face-to-face interaction among distributed users (Biocca & Rolland, 2004), and algorithms to synchronize shared states across distributed 3D collaborative environments (Hamza-Lup & Rolland, 2004a, 2004b). To verify the effect, we have developed ways to measure the degree to which teammates feel socially present with remote collaborators. Because there are many acronyms in this paper, we provide a list in Table 1 to assist the reader.

In this paper we first discuss the relative advantages of head-mounted projection displays (HMPDs) for distributed 3D collaborative AR systems. We then review differences between eyepiece and projection optics and discuss various trade-offs associated with the technology. The exploration of various HMPD designs within our laboratories is presented, as well as two novel emerging technologies based on the HMPD: the teleportal HMPD (T-HMPD) and the mobile HMPD (M-HMPD). Finally, a review of two collaborative applications, medical visualization and training and the design of infospaces, is provided.

2 Relative Advantages of Head-Mounted Projection Displays for Collaborative Interactions

Few technologies are well designed to support distributed work team interactions with complex 3D models. This is one key motivator for the creation of advanced image capture and display systems. 3D visualization devices which have succeeded in penetrating real-world markets have evolved into three formats: standard monitors/shutter glasses, head-mounted displays (HMDs) (Sutherland, 1968), and projection-based displays. Projection-based displays include video-based virtual environments that use back-projection techniques to place users within the environment (Krueger, 1977, 1985) and rear-projection cubes known as the CAVE (Cruz-Neira, Sandin, & DeFanti, 1993).

Each of these three common approaches currently imposes a significant increase in cost and/or limits the quality of social interaction as well as the sense of social

Table 1. List of Acronyms Employed in this Paper

Acronym   Full phrase
3D        Three-dimensional
ACLS      Advanced cardiac life support
AR        Augmented reality
ARC       Artificial reality center
DOE       Diffractive optical element
ETI       Endotracheal intubation
FOV       Field of view
HMD       Head-mounted display
HMPD      Head-mounted projection display
HPS       Human patient simulator
LCOS      Liquid crystal on silicon
M-HMPD    Mobile head-mounted projection display
OLED      Organic light emitting display
T-HMPD    Teleportal head-mounted projection display
UIH       Ultimate intubation head


presence of distributed team members. Monitors with shutter glasses are limited in teamwork capabilities because of the size of the display space and the lack of immersion. Among HMDs, AR displays that rely on optical see-through capability (Rolland & Fuchs, 2001) are best suited for collaborative interfaces because they enable the possibility of undistorted 3D visualization of 3D models while they also preserve the natural interactions multiple users may have within one collaborative site. They are weak, however, at providing natural interactions among multiple users located at remote sites, given that they do not support face-to-face interactions among distributed users. Also, unless ultralightweight (i.e., 10 g) optics can be designed with a reasonable FOV (i.e., at least 40°), HMDs will limit usability regardless of their type. Technologies such as the CAVE and Powerwalls have been especially conceived for the development of collaborative environments (http://pinot.lcse.umn.edu/research/powerwall/powerwall.html). The strength of these technologies lies in the vivid sense of immersion they provide for multiple users. However, these technologies are also weak at supporting social interaction for local teams working with the models, because only one user can view the 3D models accurately without perceptual distortions, leaving other members of the work team viewing distorted models. Furthermore, the technology does not support face-to-face interactions among distributed users. CAVE technology is also typically costly because of the need for multiple large projectors, and it requires a large footprint because of the rear projection on large screens.

An HMPD may be conceptually thought of as a pair of miniature projectors, one for each eye, mounted on the head, coupled with retro-reflective material strategically placed in the environment. Retro-reflective material has the property that, ideally, any ray of light hitting the material is redirected along its incident direction, as opposed to being diffused as is common in conventional projection systems. Naturally, such an imaging scheme raises two main questions: (1) How can a projector be miniaturized to the extent that it can be mounted on the head? and (2) How can retro-reflective material provide imaging capability?

Insight into these questions may be reached by comparing the HMPD imaging capability to existing technology. Consider first conventional projector systems, such as those typically found in conference rooms. The light projected on the screen reflects and diffuses in all directions, allowing multiple viewers in the room to collect a small portion of the light diffused back towards them. This small amount of diffused light enables all to see the projected images. Such projection systems require extremely high-power illumination, hence their size. They may require dimmed light in the room so that the diffusing screen is not washed out by ambient light.

Now consider how the HMPD uses light. The light from the head-worn projectors is projected towards optical retro-reflective screens. The light is then redirected in a small cone back towards the source after reaching the optical retro-reflective material. This maximizes the power received by the user, whose eye (i.e., the right eye for the right-eye projected image) is located within the path of the small cone of reflected light. (We shall explain the technology in further detail in Section 3.) Therefore, in spite of the low-power projection system mounted on the head, bright images can be perceived not only by one user but by each user of the screen, as they will not be sharing the same reflected light. Outside the path of reflection, no light will be received, because the cone of light along the path is small enough to be imperceptible by the other eye of the user. Therefore no light leakage or cross talk is possible, either between the eyes of one user or between users. Because there is no cross talk, such technology, interestingly, can also allow private or secure information to be viewed in a public work setting.
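The per-eye channel separation described above follows directly from the ray geometry: a conventional specular or diffusing screen sends light away from the projector, while an ideal retro-reflector returns it along the incident direction toward the head-worn source. A minimal sketch of the two reflection laws (illustrative only; real retro-reflective materials return the light in a small cone rather than as a perfectly reversed ray):

```python
def mirror_reflect(d, n):
    # Specular reflection: d' = d - 2 (d . n) n, for unit direction d and unit normal n.
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def retro_reflect(d):
    # Ideal retro-reflection: d' = -d, independent of the screen's orientation,
    # so the light heads back toward the head-worn projector (and the user's eye).
    return tuple(-di for di in d)

incident = (0.6, 0.0, -0.8)   # oblique ray from the projector toward the screen
normal = (0.0, 0.0, 1.0)      # screen normal

print(mirror_reflect(incident, normal))  # ~(0.6, 0.0, 0.8): bounces away from the source
print(retro_reflect(incident))           # ~(-0.6, 0.0, 0.8): returns toward the source
```

Because the retro-reflected direction does not depend on the screen normal, the same sketch also anticipates Section 3.2: the perceived image is, ideally, independent of the screen's shape and orientation.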

The HMPD designs have evolved significantly since they were conceived. Fisher (1996) pioneered the concept of combining a projector mounted on the head with retro-reflective material strategically placed in the environment. The system proposed was biocular with a one-microdisplay architecture, and the same image was presented to both eyes. Fergason (1997) extended the


conceptual design to binocular displays. He also made a conceptual improvement to the technology, consisting of adding retro-reflective material located at 90 degrees from the material placed strategically in the environment to increase light throughput. It is important to note that a key benefit of the HMPD in the original Fisher configuration is providing natural occlusion of virtual objects by real objects interposed between the HMPD and the retro-reflective material, such as a hand reaching out to grab a virtual object, as further described in Section 3. Fergason's dual retro-reflective material constitutes an improvement in system illumination. However, it also compromises the natural unidirectional occlusion inherently present in the Fisher HMPD.

Early demonstrations of the HMPD technology based on commercially available optics were first presented by Kijima and Ojika (1997), and independently by Parsons and Rolland (1998) and Rolland, Parsons, Poizat, and Hancock (1998). Shortly after, Tachi and his group developed a configuration named MEDIA X'tal, which was not head-mounted yet used two projectors positioned similarly to those in an HMPD, together with retro-reflective material (Kawakami et al., 1999). He then also applied the concept to an HMPD (Inami et al., 2000).

The Holy Grail of HMD technology development is lightweight ergonomic design, and a fundamental question is whether projection optics inherently provides an advantage. Early in the development of the HMPD technology, we envisioned a mobile, lightweight, multiuser system and therefore sought to develop lightweight, compact optics. A custom-designed 23 g per eye compact projection optics using commercial off-the-shelf optical lenses was first designed by Poizat and Rolland (1998). The design was supported by an analysis of retro-reflective materials for imaging (Girardot & Rolland, 1999) and a study of the engineering of HMPDs (Hua, Girardot, Gao, & Rolland, 2000). An ultralightweight (i.e., less than 8 g per eye), 52° diagonal FOV, color-corrected projection optics was next designed using a 1.35 in. diagonal microdisplay and aspherical and diffractive optical elements (DOEs) (Hua & Rolland, 2004). The design of a 70° FOV followed using

the same microdisplay (Ha & Rolland, 2004). Next, a 42° FOV projection optics was designed using a 0.6 in. diagonal organic light emitting display (OLED) (Martins, Ha, & Rolland, 2003).

Such design approaches based on DOEs not only provide ultralightweight solutions but also, importantly, provide an elegant design approach in terms of chromatic aberration correction. One main insight into such a correction comes from realizing that a DOE may be thought of as an optical grating that follows the laws of optical diffraction. Consequently, a DOE has a strong negative chromatic dispersion that can be used to achromatize refractive lenses. A trade-off associated with using DOEs is a potential small loss in efficiency, which can affect the contrast of high-contrast targets. Thus, in designing such systems, careful quantification of DOE efficiency across the visible spectrum must be performed to ensure good image contrast and optimal performance for the system specification.

The steady progress toward engineering lightweight HMPDs is shown in Figure 1. Early in the design, we considered both top- and side-mounted configurations. We originally chose a top-mounted configuration, as shown in Figure 1a, for two reasons: (1) to provide the best possible alignment of the optics, and (2) to more easily mount the bulky electronics. Upon a successful first prototype, in collaboration with NVIS Inc., we had the main electronics board associated with the microdisplay placed away from it in a control box, and we investigated a side-mounted display driven by a medical training application further described in Section 5.1. In this application, the trainees must bend over a human patient simulator (HPS), and moving the packaging weight to the side was desired. This approach led to the HMPD shown in Figure 1b. We learned that such a configuration successfully moved weight to the side; however, it also produced some tunnel vision, as the user's peripheral vision was rather occluded by the side-mounted optics. A recent design, shown in Figure 1c, capitalized on the ultracompact electronics associated with OLED microdisplays (e.g., http://www.emagin.com). To avoid the tunnel vision aspect, which we found very disturbing, we targeted a top-mounted geometry


to allow full peripheral vision. Also, because of the extremely compact electronics of the microdisplay in this case, the overall system is less than 600 g including cables, and can be further lightened past this early prototype level, as metallic structures still remain. Having introduced the evolution of these systems, we consider the optics of these systems in greater detail in the following section, because the optical design is critical to their future development, making HMPDs light, bright, wide-FOV, wearable, and mobile.

Figure 1. Steady progress toward engineering lightweight HMPDs. (a) 2001 HMPD & FaceCapture; optics mount: top-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (b) 2003 HMPD; optics mount: side-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (c) 2004 HMPD; optics mount: top-mounted; microdisplay: OLED; pixel resolution: (800 × 3) × 600.


3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin, or equivalently the eyes of the user in our optical configuration, given that the positions of the eyes of the user are made conjugate to the positions of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Therefore, two unique optical components distinguish the HMPD technology from conventional HMDs and stereoscopic projection displays such as the CAVE: (1) projection optics, rather than the eyepiece optics used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than the diffusing screen typically used in projection-based systems.

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field of View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of an HMPD is shown in Figure 2a-b. An important feature of eyepiece optics in an HMD is the propagation of the light solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates in the real environment up to the retro-reflective material before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD, it is possible to have a user occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

The usage of projection optics allows for a larger FOV (i.e., 40° diagonal) and less optical distortion (2.5%

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.


at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs of equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil-forming and non-pupil-forming systems. By pupil forming, we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil-forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the eyepoint chosen for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that, as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil-forming eyepiece, the pupil of the eye itself (i.e., the image of the eye's iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off of FOV versus weight. An advantage of non-pupil-forming optics, however, is that the user may benefit from a larger eye box in which eye movements may occur more freely without vignetting of the image (i.e., vignetting refers to partial or full loss of light in the image).
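The cubic scaling of weight with optics diameter mentioned above can be made concrete with a rough geometric-scaling sketch. The numbers below are illustrative only, not measured values from any of the designs discussed:

```python
def scaled_weight_g(w0_g, d0_mm, d_mm):
    # Geometric scaling: if all dimensions of a lens assembly grow in
    # proportion to its diameter, its mass grows as the cube of the
    # diameter ratio.
    return w0_g * (d_mm / d0_mm) ** 3

# A hypothetical 6 g, 18-mm-diameter assembly, scaled up to support a
# larger external exit pupil and FOV:
print(scaled_weight_g(6.0, 18.0, 27.0))  # 1.5x the diameter -> ~20 g
print(scaled_weight_g(6.0, 18.0, 36.0))  # 2x the diameter -> 48 g
```

This is why an external exit pupil becomes punishing at wide fields of view, whereas the virtual pupil of projection optics (next paragraph) keeps the optics size nearly invariant.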

Projection optics is intrinsically a pupil-forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases, the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While optical correction is currently readily available for various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that can become more pronounced with increasing amounts of required correction. The fact that the microdisplay pixels remain square regardless of the level of predistortion compensation applied to the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of occlusion capability of virtual objects by real objects. The user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and trachea, Visible Human Dataset 3D models, courtesy of Celina Imielinska, Columbia University.)
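The distortion percentages quoted in this paper follow the standard definition: the deviation of the real image height from the ideal f·tan(θ) (pinhole) mapping, expressed as a fraction of the ideal height. A small sketch with hypothetical numbers (the 35-mm focal length matches Section 3.3; the image height is invented for illustration):

```python
import math

def distortion_percent(h_real_mm, field_angle_deg, focal_length_mm):
    # Classical distortion: 100 * (h_real - h_ideal) / h_ideal,
    # where h_ideal = f * tan(theta) is the ideal pinhole image height.
    h_ideal = focal_length_mm * math.tan(math.radians(field_angle_deg))
    return 100.0 * (h_real_mm - h_ideal) / h_ideal

# Hypothetical: with f = 35 mm at a 26 deg field angle, the ideal height is
# ~17.07 mm; a real height of 16.7 mm gives ~-2.2% (barrel) distortion.
print(round(distortion_percent(16.7, 26.0, 35.0), 1))
```

A negative value indicates barrel distortion and a positive value pincushion; designing with the pupil at the nodal points drives this quantity toward zero across the field.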

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back along the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies in the location of the material with respect to the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy in the location of the projected images and the material, as now quantified.

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric, Silver-beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film, Silver-cubed), both shown on a microscopic scale in Figure 4b.[1] The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. From this analysis, it is shown that the micro corner-cube material is superior in maximizing the light throughput and minimizing loss in resolution due to diffraction. In a companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

[1] Scotchlite is a registered trademark of the 3M company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.
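The scale of the diffraction blur discussed above can be estimated with a simple single-aperture model, treating each retro-reflective element as a small aperture of pitch a: the central diffraction lobe then has a half-angle of roughly λ/a, so an image focused a distance Δ away from the material is blurred by roughly 2Δλ/a. This is only an order-of-magnitude sketch with hypothetical numbers; the full PSF modeling is the one reported in Martins and Rolland (2003):

```python
def diffraction_blur_mm(wavelength_mm, element_pitch_mm, defocus_mm):
    # Half-angle of the central diffraction lobe for an aperture of width a
    # is ~ lambda / a (small-angle approximation); the lateral blur over a
    # defocus distance is twice that defocus times the half-angle.
    half_angle_rad = wavelength_mm / element_pitch_mm
    return 2.0 * defocus_mm * half_angle_rad

# Hypothetical numbers: 550-nm (green) light, 0.1-mm element pitch,
# projected image focused 50 mm away from the material:
print(diffraction_blur_mm(550e-6, 0.1, 50.0))  # ~0.55 mm of blur
```

The model makes the two practical conclusions of this section visible: blur shrinks as the material approaches the focused image (Δ → 0), and larger, well-formed corner-cube elements (larger effective a) diffract less than fine beads.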

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, their resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35 in. diagonal backlighting color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-μm pixel size. While higher resolution would be preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight four-element compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual fields in both cases.
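The angular-resolution and FOV figures above follow directly from the microdisplay geometry and the 35-mm focal length; a quick check, assuming the 1.35-in diagonal corresponds to roughly 34.3 mm:

```python
import math

def pixel_angle_arcmin(pixel_pitch_mm, focal_length_mm):
    # Angle subtended by one pixel when imaged through optics of focal length f.
    return math.degrees(math.atan(pixel_pitch_mm / focal_length_mm)) * 60.0

def diagonal_fov_deg(display_diagonal_mm, focal_length_mm):
    # Full diagonal field of view: 2 * atan((diagonal / 2) / f).
    return 2.0 * math.degrees(math.atan(display_diagonal_mm / (2.0 * focal_length_mm)))

# 42-um pixels through a 35-mm focal length:
print(round(pixel_angle_arcmin(0.042, 35.0), 1))  # 4.1 arc minutes per pixel
# 1.35-in (~34.3-mm) diagonal display through the same optics:
print(round(diagonal_fov_deg(34.3, 35.0), 1))     # ~52.2 deg diagonal FOV
```

Both results are consistent with the quoted 4.1 arc minutes per pixel and 52° diagonal FOV.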

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, such as liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus they require additional optical and opto-mechanical components that will add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions supplemented by language provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face without occlusion, with minimal interference within the user's visual field and only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

Figure 6a,b shows the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal-length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXC-LS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors. Optimization of the optical face capture system is underway for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). A feasibility demonstration of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test the algorithm in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)
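The 65-mm radius of curvature quoted above can be sanity-checked with the thin-mirror equation: a convex mirror of radius R has focal length −R/2, so a face at typical capture distance forms a small, upright virtual image that fits comfortably within the lipstick camera's narrow field. A rough first-order sketch; the 200-mm face distance is an assumed value for illustration, not a measured parameter of the system:

```python
def convex_mirror_image(radius_mm: float, object_mm: float):
    """Thin-mirror imaging: 1/d_i + 1/d_o = 1/f, with f = -R/2 for a convex
    mirror. Returns (image distance, lateral magnification); a negative
    image distance means a virtual image behind the mirror."""
    f = -radius_mm / 2.0
    d_i = 1.0 / (1.0 / f - 1.0 / object_mm)
    m = -d_i / object_mm
    return d_i, m

d_i, m = convex_mirror_image(65.0, 200.0)
print(f"virtual image {abs(d_i):.1f} mm behind mirror, magnification {m:.2f}")
# -> a ~0.14x minified virtual face image about 28 mm behind the mirror,
#    i.e., the whole face fits within the small camera field of view
```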

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and integrating the retro-reflective material solely within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 μm instead of 100 μm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in. diagonal OLED of higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 μm. Such microstructures are not commercially available, and custom-designed materials are being investigated.
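The scaling argument above can be sketched numerically: if the intermediate image inside the M-HMPD is demagnified by some factor relative to an image projected onto distant environmental fabric, the retro-reflective microstructures must shrink by roughly the same factor to keep the same proportion to a pixel footprint. A minimal illustration, where the ~0.1× demagnification is an assumption chosen only to reproduce the 100 μm → 10 μm ratio quoted above:

```python
def required_structure_um(env_structure_um: float, image_demag: float) -> float:
    """If the projected image at the internal material is demagnified by
    image_demag relative to an image on environment-scale fabric, the
    microstructures must shrink by the same factor to preserve their
    proportion to a pixel footprint."""
    return env_structure_um * image_demag

# Environment-scale material uses ~100-um cube corners; an assumed ~0.1x
# demagnification implies ~10-um structures inside the M-HMPD.
print(required_structure_um(100.0, 0.1))
```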

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization, including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs, ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form with 52° ultralightweight optics lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses, as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing of bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
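The scaling-and-registration step described above is, at its core, a landmark-fitting problem. One standard way to solve it, shown here as an illustrative sketch rather than the lab's actual implementation, is the Umeyama/Kabsch least-squares similarity transform between corresponding model landmarks and tracked landmarks:

```python
import numpy as np

def register_landmarks(model_pts: np.ndarray, hps_pts: np.ndarray):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that s * R @ model + t best fits the tracked landmark positions,
    via the Umeyama/Kabsch SVD solution. Inputs are (n, 3) arrays of
    corresponding points."""
    mu_m, mu_h = model_pts.mean(axis=0), hps_pts.mean(axis=0)
    M, H = model_pts - mu_m, hps_pts - mu_h
    U, S, Vt = np.linalg.svd(H.T @ M)            # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (M ** 2).sum()  # isotropic scale factor
    t = mu_h - s * R @ mu_m
    return s, R, t
```

Applying `s * R @ p + t` to every vertex of the anatomical model then brings it into the tracked coordinate frame of the simulator; a practical system would use more landmarks than strictly necessary and reject outliers.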

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, but also to remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached to an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications on where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments that are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the imagery comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these, efforts such as our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, together with retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable artificial reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184-2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6-8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456-480.

546 PRESENCE VOLUME 14 NUMBER 5

Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135-142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69-117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120-129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10-12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550-562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327-352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99-106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315-327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213-235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234-250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814-3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97-107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869-873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233-240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066-1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130-137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423-433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34-40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277-283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321-1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238-244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246-251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123-164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251-265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1-6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179-200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113-156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901-912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288-294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760-764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 419-24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21-32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153-164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head-mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757-764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73-80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364-403.


training, effective face-to-face distributed business negotiation and conferencing, distance education, and mediated governmental services. The displays and interfaces are designed to support mobile AR navigation, telepresence, and advanced information systems, which we refer to as mobile infospaces, for applications such as medical and disaster relief scenarios.

The AR systems introduced below aim to create a compelling sense of fully embodied interaction with spaces and people that are not immediately present in the same physical environment. The acceptance and efficiency of distributed collaborative systems may require advanced media technologies that can create a strong sense of social presence, defined as the sense of "being with others" (Short, Williams, & Christie, 1976; Biocca, Harms, & Burgoon, 2003). Participants may need to feel that people who are not in the same room or place have a similar impact on team processes as those who are physically present. To create this effect, our research efforts include the development of technologies that enable both 3D dataset visualization (Hamza-Lup, Davis, & Rolland, 2002) and 3D face-to-face interaction among distributed users (Biocca & Rolland, 2004), as well as algorithms to synchronize shared states across distributed 3D collaborative environments (Hamza-Lup & Rolland, 2004a, 2004b). To verify the effect, we have developed ways to measure the degree to which teammates feel socially present with remote collaborators. Because there are many acronyms in this paper, we provide a list in Table 1 to assist the reader.

In this paper we first discuss the relative advantages of head-mounted projection displays (HMPDs) for distributed 3D collaborative AR systems. We then review differences between eyepiece and projection optics and discuss various trade-offs associated with the technology. The exploration of various HMPD designs within our laboratories is presented, as well as two novel emerging technologies based on the HMPD: the teleportal HMPD (T-HMPD) and the mobile HMPD (M-HMPD). Finally, a review of two collaborative applications, medical visualization and training and the design of infospaces, is provided.

2 Relative Advantages of Head-Mounted Projection Displays for Collaborative Interactions

Few technologies are well designed to support distributed work team interactions with complex 3D models. This is one key motivator for the creation of advanced image capture and display systems. 3D visualization devices that have succeeded in penetrating real world markets have evolved into three formats: standard monitors with shutter glasses, head-mounted displays (HMDs) (Sutherland, 1968), and projection-based displays. Projection-based displays include video-based virtual environments that use back-projection techniques to place users within the environment (Krueger, 1977, 1985) and rear-projection cubes known as the CAVE (Cruz-Neira, Sandin, & DeFanti, 1993).

Each of these three common approaches currently imposes a significant increase in cost and/or limits the quality of social interaction as well as the sense of social

Table 1. List of Acronyms Employed in this Paper

Acronym   Full phrase
3D        Three-dimensional
ACLS      Advanced cardiac life support
AR        Augmented reality
ARC       Artificial reality centers
DOE       Diffractive optical element
ETI       Endotracheal intubation
FOV       Field of view
HMD       Head-mounted display
HMPD      Head-mounted projection display
HPS       Human patient simulator
LCOS      Liquid crystal on silicon
M-HMPD    Mobile head-mounted projection display
OLED      Organic light emitting display
T-HMPD    Teleportal head-mounted projection display
UIH       Ultimate intubation head


presence of distributed team members. Monitors with shutter glasses are limited in teamwork capabilities because of the size of the display space and the lack of immersion. Among HMDs, AR displays that rely on optical see-through capability (Rolland & Fuchs, 2001) are best suited for collaborative interfaces, because they enable the possibility of undistorted 3D visualization of 3D models while they also preserve the natural interactions multiple users may have within one collaborative site. They are weak, however, at providing natural interactions among multiple users located at remote sites, given that they do not support face-to-face interactions among distributed users. Also, unless ultralightweight (i.e., 10 g) optics can be designed with a reasonable FOV (i.e., at least 40°), HMDs will limit usability regardless of their type. Technologies such as the CAVE and Powerwalls have been especially conceived for the development of collaborative environments (http://pinot.lcse.umn.edu/research/powerwall/powerwall.html). The strength of these technologies lies in the vivid sense of immersion they provide for multiple users. However, these technologies are also weak at supporting social interaction for local teams working with the models, because only one user can view the 3D models accurately without perceptual distortions, leaving other members of the work team viewing distorted models. Furthermore, the technology does not support face-to-face interactions among distributed users. CAVE technology is also typically costly because of the need for multiple large projectors, and it requires a large footprint because of the rear projection on large screens.

An HMPD may be conceptually thought of as a pair of miniature projectors, one for each eye, mounted on the head and coupled with retro-reflective material strategically placed in the environment. Retro-reflective material has the property that, ideally, any ray of light hitting the material is redirected along its incident direction, as opposed to being diffused as in conventional projection systems. Naturally, such an imaging scheme raises two main questions: (1) How can a projector be miniaturized to the extent that it can be mounted on the head? and (2) How can retro-reflective material provide imaging capability?

Insight into these questions may be gained by comparing the HMPD imaging capability to existing technology. Consider first conventional projector systems, such as those typically found in conference rooms. The light projected on the screen reflects and diffuses in all directions, allowing multiple viewers in the room to collect a small portion of the light diffused back towards them. This small amount of diffused light enables all to see the projected images. Such projection systems require extremely high power illumination, hence their size. They may require dimmed light in the room so that the diffusing screen is not washed out by ambient light.

Now consider how the HMPD uses light. The light from the head-worn projectors is projected towards optical retro-reflective screens. Upon reaching the retro-reflective material, the light is redirected in a small cone back towards the source. This maximizes the power received by the user, whose eye (i.e., the right eye for the right-eye projected image) is located within the path of the small cone of reflected light. (We shall explain the technology in further detail in Section 3.) Therefore, in spite of the low power projection system mounted on the head, bright images can be perceived not by only one user but by each user of the screen, as they will not be sharing the same reflected light. Outside the path of reflection, no light will be received, because the cone of light along the path is small enough to be imperceptible by the other eye of the user. Therefore, no light leakage or cross talk is possible, either between the eyes of one user or between users. Because there is no cross talk, such technology can, interestingly, also allow private or secure information to be viewed in a public work setting.
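The retro-reflection property itself follows from elementary vector geometry: a corner-cube element reflects an incoming ray off three mutually orthogonal surfaces, negating one component of the direction vector at each bounce, so the ray exits antiparallel to its arrival. A minimal sketch (our illustration, not from the paper):

```python
# Sketch: why a corner cube retro-reflects. Reflecting a direction
# vector off three mutually orthogonal mirrors negates each of its
# components in turn, so the outgoing ray is exactly reversed.

def reflect(v, n):
    """Reflect direction v off a plane mirror with unit normal n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * d * ni for vi, ni in zip(v, n))

ray = (0.48, -0.6, 0.64)                          # arbitrary incoming direction
for normal in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:  # three orthogonal faces
    ray = reflect(ray, normal)

print(ray)  # (-0.48, 0.6, -0.64): antiparallel to the incoming ray
```

Because the exit direction is reversed regardless of the incidence angle, the returned light stays concentrated in a narrow cone about each eye, which is what makes the low-power head-worn projectors viable.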

The HMPD designs have evolved significantly since they were conceived. Fisher (1996) pioneered the concept of combining a projector mounted on the head with retro-reflective material strategically placed in the environment. The system proposed was biocular, with a one-microdisplay architecture, and the same image was presented to both eyes. Fergason (1997) extended the conceptual design to binocular displays. He also made a conceptual improvement to the technology, consisting of adding retro-reflective material located at 90 degrees from the material placed strategically in the environment, to increase light throughput. It is important to note that a key benefit of the HMPD in the original Fisher configuration is providing natural occlusion of virtual objects by real objects interposed between the HMPD and the retro-reflective material, such as a hand reaching out to grab a virtual object, as further described in Section 3. Fergason's dual retro-reflective material constitutes an improvement in system illumination; however, it also compromises the natural unidirectional occlusion inherently present in the Fisher HMPD.

Early demonstrations of the HMPD technology based on commercially available optics were given by Kijima and Ojika (1997), and independently by Parsons and Rolland (1998) and Rolland, Parsons, Poizat, and Hancock (1998). Shortly after, Tachi and his group developed a configuration named MEDIA X'tal, which was not head-mounted, yet used two projectors positioned similarly to those in an HMPD, together with retro-reflective material (Kawakami et al., 1999). He then also applied the concept to an HMPD (Inami et al., 2000).

The Holy Grail of HMD technology development is a lightweight, ergonomic design, and a fundamental question is whether projection optics inherently provides an advantage. Early in the development of the HMPD technology, we envisioned a mobile, lightweight, multiuser system and therefore sought to develop lightweight, compact optics. A custom-designed 23 g per eye compact projection optics using commercially off-the-shelf optical lenses was first designed by Poizat and Rolland (1998). The design was supported by an analysis of retro-reflective materials for imaging (Girardot & Rolland, 1999) and a study of the engineering of HMPDs (Hua, Girardot, Gao, & Rolland, 2000). An ultralightweight (i.e., less than 8 g per eye), 52° diagonal FOV, color-corrected projection optics was next designed, using a 1.35 in. diagonal microdisplay and aspherical and diffractive optical elements (DOEs) (Hua & Rolland, 2004). The design of a 70° FOV followed, using the same microdisplay (Ha & Rolland, 2004). Next, a 42° FOV projection optics was designed using a 0.6 in. diagonal organic light emitting display (OLED) (Martins, Ha, & Rolland, 2003).

Such design approaches based on DOEs not only provide ultralightweight solutions but also, importantly, provide an elegant design approach in terms of chromatic aberration correction. One main insight into such a correction comes from realizing that a DOE may be thought of as an optical grating that follows the laws of optical diffraction. Consequently, a DOE has a strong negative chromatic dispersion that can be used to achromatize refractive lenses. A trade-off associated with using DOEs is a potential small loss in efficiency, which can affect the contrast of high-contrast targets. Thus, in designing such systems, careful quantification of DOE efficiency across the visible spectrum must be performed to ensure good image contrast and optimal performance for the system specification.
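To make the achromatization argument concrete, the following sketch (our illustration, not a design from the paper) uses the standard thin-lens achromat condition, phi_r/V_r + phi_d/V_d = 0. The effective "Abbe number" of a diffractive surface, V_d = lambda_d / (lambda_F - lambda_C), is about -3.45 and is fixed by the wavelengths themselves, so only a weak diffractive contribution is needed; the BK7-like glass and the 35-mm focal length are assumed values for illustration:

```python
# Power split of a hybrid refractive/diffractive achromat (thin-lens model).
# Zero primary axial color requires phi_r/V_r + phi_d/V_d = 0,
# with phi_r + phi_d = phi_total.

lam_d, lam_F, lam_C = 587.6, 486.1, 656.3      # d, F, C spectral lines, nm
V_diff = lam_d / (lam_F - lam_C)               # ~ -3.45: strong negative dispersion
V_ref = 64.2                                   # assumed glass (BK7-like)

phi_total = 1 / 35.0                           # assumed 35-mm focal length, in mm^-1
phi_r = phi_total * V_ref / (V_ref - V_diff)   # refractive share of the power
phi_d = phi_total * V_diff / (V_diff - V_ref)  # diffractive share of the power

print(round(V_diff, 3), round(phi_r / phi_total, 3), round(phi_d / phi_total, 3))
```

With these numbers the diffractive surface carries only about 5% of the total power while canceling the primary color of the refractive lens, which is why DOEs can lighten a design rather than add elements to it.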

The steady progress toward engineering lightweight HMPDs is shown in Figure 1. Early in the design, we considered both top- and side-mounted configurations. We originally chose a top-mounted configuration, as shown in Figure 1a, for two reasons: (1) to provide the best possible alignment of the optics, and (2) to more easily mount the bulky electronics. Upon a successful first prototype, in collaboration with NVIS Inc., we had the main electronics board associated with the microdisplay placed away from it in a control box, and we investigated a side-mounted display, driven by a medical training application further described in Section 5.1. In this application, the trainees must bend over a human patient simulator (HPS), and moving the packaging weight to the side was desired. This approach led to the HMPD shown in Figure 1b. We learned that such a configuration successfully moved weight to the side; however, it also produced some tunnel vision, as the user's peripheral vision was rather occluded by the side-mounted optics. A recent design, shown in Figure 1c, capitalized on the ultracompact electronics associated with OLED microdisplays (e.g., http://www.emagin.com). To avoid the tunnel vision, which we found very disturbing, we targeted a top-mounted geometry


to allow full peripheral vision. Also, because of the extremely compact electronics of the microdisplay, in this case the overall system is less than 600 g, including cables, and can be further lightened past this early prototype level, as metallic structures still remain. Having introduced the evolution of these systems, we consider their optics in greater detail in the following section, because the optical design is critical to their future development, making HMPDs light, bright, wide-FOV, wearable, and mobile.

Figure 1. Steady progress toward engineering lightweight HMPDs. (a) 2001 HMPD and face capture; optics mount: top-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (b) 2003 HMPD; optics mount: side-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (c) 2004 HMPD; optics mount: top-mounted; microdisplay: OLED; pixel resolution: (800 × 3) × 600.


3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin, or equivalently the eyes of the user in our optical configuration, given that the positions of the eyes of the user are made conjugate to the positions of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Two unique optical components therefore distinguish the HMPD technology from conventional HMDs and stereoscopic projection displays such as the CAVE: (1) projection optics, rather than the eyepiece optics used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than the diffusing screen typically used in projection-based systems.

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field-of-View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of an HMPD is shown in Figure 2a-b. An important feature of eyepiece optics in an HMD is that the light propagates solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates in the real environment up to the retro-reflective material before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD, it is possible for a user to occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

The use of projection optics allows for a larger FOV (i.e., 40° diagonal) and less optical distortion (2.5%

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as an HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.


at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs of equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil-forming and non-pupil-forming systems. By pupil forming, we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil-forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the eyepoint chosen for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that, as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil-forming eyepiece, the pupil of the eye itself (i.e., the image of the eye's iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off of FOV versus weight. An advantage of non-pupil-forming optics, however, is that the user may benefit from a larger eye box, within which eye movements may occur more freely without vignetting of the image (i.e., partial or full loss of the image).
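As a rough illustration of that cubic scaling (our own back-of-the-envelope numbers, not figures from the paper), enlarging the optics diameter by 50% to cover a wider field multiplies the glass weight by roughly 3.4:

```python
# Back-of-the-envelope: at fixed shape, lens volume (and hence weight)
# grows as the cube of the diameter, so modest FOV gains in an
# external-pupil eyepiece design are costly in weight.

def weight_scale(d_old_mm, d_new_mm):
    return (d_new_mm / d_old_mm) ** 3

print(weight_scale(20.0, 30.0))  # 3.375: 50% larger optics, ~3.4x heavier
```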

Projection optics is intrinsically a pupil-forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter, at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases, the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While such correction is currently readily available in various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that can become more pronounced with increasing amounts of required correction. The fact that the microdisplay pixels remain square regardless of the level of predistortion compensation of the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus, even today with high-speed processing, distortion-free optics presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of occlusion of virtual objects by real objects: the user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and trachea 3D models from the Visible Human Dataset, courtesy of Celina Imielinska, Columbia University.)

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back along the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of a screen made of such material. The technology can therefore be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on shape may be observed outside a certain range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies between the location of the material and the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy between the location of the projected images and the material, as now quantified.

An analysis of diffraction was conducted on two typesof material a micro-beaded type of material (ieScotchlite 3M Fabric Silver-beaded) and a microcorner-cube type of material (ie Scotchlite 3M FilmSilver-cubed) both shown on a microscopic scale inFigure 4b1 The analysis shown in Figure 4a quantifiesthe spread of light after retro-reflection due to diffrac-tion Measurements made in the laboratory and alsoreported in Figure 4b indicate a good correlation in theresults with the mathematical predictions From thisanalysis it is shown that the micro corner-cube materialwill be superior in maximizing the light throughput andminimizing loss in resolution due to diffraction In a

¹ Scotchlite is a registered trademark of the 3M Company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.


companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays: specifically, their size, their resolution, and whether they are self-emissive or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in. diagonal, backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-µm pixel size. While higher resolution would be preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye, with a 35-mm focal length, was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight, four-element, compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher

image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual fields in both cases.
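The quoted field of view and per-pixel resolution follow directly from the focal length and pixel pitch. The sketch below is our own cross-check of the published numbers, assuming a 640 × 480 active area and that the 52° figure refers to the diagonal FOV:

```python
import math

# Cross-check of the HMPD optics numbers quoted in the text (our own
# arithmetic; assumes 52 deg is the diagonal FOV of a 640 x 480 panel).
focal_length_mm = 35.0
pixel_um = 42.0
h_pixels, v_pixels = 640, 480

# 800-pixel diagonal at 42-um pitch -> 33.6-mm display diagonal.
diag_mm = math.hypot(h_pixels, v_pixels) * pixel_um * 1e-3
fov_deg = 2 * math.degrees(math.atan(diag_mm / 2 / focal_length_mm))
arcmin_per_pixel = 60 * math.degrees(math.atan(pixel_um * 1e-3 / focal_length_mm))

print(round(fov_deg, 1))           # 51.3 -- just under the quoted 52 deg
print(round(arcmin_per_pixel, 2))  # 4.13 -- matches the quoted 4.1 arcmin
```

The small gap to 52° is consistent with the slightly larger 1.35-in. (34.3-mm) panel diagonal, which exceeds the 33.6 mm spanned by the active pixels.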

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, as do liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that will add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates,


especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but fail to provide accurate cues of spatial attention, and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design tailored for face-to-face interaction, referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, it would be advantageous, given the ultralight weight of the optics of HMPDs, to expand the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD, composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face without occlusion, with minimal interference within the user's visual field and only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figure 6a,b shows the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXCLS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.


mirrors. The optical face capture system is being optimized for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)


(Optical Diagnostics and Applications Lab) unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). A feasibility demonstration of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
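The step of placing the remote face "in the analogous spatial relation" using remote tracker data can be sketched as a composition of rigid poses. The following is a minimal illustration of our own (the anchor poses and helper names are hypothetical, not the authors' code): each pose is a rotation matrix plus translation, and the remote head is expressed relative to a shared reference point (e.g., a table) tracked at each site.

```python
# Each pose is a (R, t) pair: 3x3 rotation (nested lists) and translation.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def compose(p, q):
    """Apply pose q, then pose p."""
    Rp, tp = p
    Rq, tq = q
    return mat_mul(Rp, Rq), [a + b for a, b in zip(mat_vec(Rp, tq), tp)]

def inverse(p):
    R, t = p
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # transpose
    return Rt, [-x for x in mat_vec(Rt, t)]

def place_remote_face(local_anchor, remote_anchor, remote_head):
    """Express the remote head relative to the remote anchor, then
    re-express that relative pose in the local anchor's frame."""
    return compose(local_anchor, compose(inverse(remote_anchor), remote_head))

def translation(x, y, z):
    return [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [x, y, z]

# Remote user stands 1 m in front of the remote table (z = -1); the face
# avatar appears 1 m in front of the local table as well.
local_table = translation(2, 0, 0)
remote_table = translation(5, 0, 0)
remote_head = translation(5, 0, -1)
print(place_remote_face(local_table, remote_table, remote_head)[1])  # [2, 0, -1]
```

In practice the anchor poses would come from a one-time calibration of each site's tracker, and the head pose would stream in per frame.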

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and integrating the retro-reflective material solely within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).


Figure 7 with a Fresnel lens for low weight and compactness.
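The role of this additional lens can be illustrated with a thin-lens estimate. The numbers below are illustrative assumptions of ours, not the published design values: placing the embedded material just inside the focal length of a short-focal-length Fresnel lens yields a magnified virtual image far from the user.

```python
def image_distance_mm(f_mm, object_mm):
    """Thin-lens equation 1/s' = 1/f + 1/s, with the object distance s
    negative when the object lies to the left of the lens."""
    return 1.0 / (1.0 / f_mm + 1.0 / object_mm)

# Material 28 mm from a 30-mm focal length Fresnel lens (illustrative
# values): the image forms 420 mm away on the same side, i.e., a virtual,
# magnified image of the material "at a remote distance from the user".
s_prime = image_distance_mm(30.0, -28.0)
print(round(s_prime, 1))  # -420.0
```

Placing this virtual image of the material in coincidence with the virtual image of the microdisplay is what keeps the diffraction blur, discussed in Section 3.2, acceptably small.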

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in. diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of roughly 10-µm scale microstructure retro-reflective material. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).


users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Variants of the HMPD optics targeted at specific applications, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004), have been developed in our laboratory. Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact with visualizations of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC. Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places, in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)


location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing of bringing these models alive (e.g., a deformable, breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, and yet remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance, in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called Mobile Infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object-manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5-25%, and speeding the time to complete the assembly task by 5-26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in

² Polaris is a registered trademark of Northern Digital Inc.


performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data-object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our Mobile Infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The Mobile Infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached like an egocentric framework (i.e., a field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In experiments exploring whether this capability could be leveraged for AR menus and data, we examined whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the Mobile Infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments that are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as our Mobile Infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct-perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The Mobile Infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677, the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184-2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6-8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456-480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135-142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69-117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120-129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10-12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550-562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327-352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99-106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315-327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213-235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234-250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814-3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97-107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869-873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233-240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066-1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130-137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423-433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34-40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277-283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321-1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238-244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246-251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123-164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251-265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1-6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179-200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113-156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901-912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288-294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760-764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19-24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21-32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153-164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757-764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73-80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364-403.



presence of distributed team members. Monitors with shutter glasses are limited in teamwork capabilities because of the size of the display space and the lack of immersion. Among HMDs, AR displays that rely on optical see-through capability (Rolland & Fuchs, 2001) are best suited for collaborative interfaces because they enable the possibility of undistorted 3D visualization of 3D models while they also preserve the natural interactions multiple users may have within one collaborative site. They are weak, however, at providing natural interactions among multiple users located at remote sites, given that they do not support face-to-face interactions among distributed users. Also, unless ultralightweight (i.e., ~10 g) optics can be designed with a reasonable FOV (i.e., at least 40°), HMDs will limit usability regardless of their type. Technologies such as the CAVE and Powerwalls have been especially conceived for the development of collaborative environments (http://pinot.lcse.umn.edu/research/powerwall/powerwall.html). The strength of these technologies lies in the vivid sense of immersion they provide for multiple users. However, these technologies are also weak at supporting social interaction for local teams working with the models, because only one user can view the 3D models accurately without perceptual distortions, leaving other members of the work team viewing distorted models. Furthermore, the technology does not support face-to-face interactions among distributed users. CAVE technology is also typically costly because of the need for multiple large projectors, and it requires a large footprint because of the rear projection on large screens.

An HMPD may be conceptually thought of as a pair of miniature projectors, one for each eye, mounted on the head, coupled with retro-reflective material strategically placed in the environment. Retro-reflective material has the property that, ideally, any ray of light hitting the material is redirected along its incident direction, as opposed to being diffused as is common with conventional projection systems. Naturally, such an imaging scheme raises two main questions: (1) How can a projector be miniaturized to the extent that it can be mounted on the head? and (2) How can retro-reflective material provide imaging capability?

Insight into these questions may be reached by comparing the HMPD imaging capability to existing technology. Consider first conventional projector systems, such as those typically found in conference rooms. The light projected on the screen reflects and diffuses in all directions, allowing multiple viewers in the room to collect a small portion of the light diffused back towards them. This small amount of diffused light enables all to see the projected images. Such projection systems require extremely high power illumination, hence their size. They may require dimmed light in the room so that the diffusing screen is not washed out by ambient light.

Now consider how the HMPD uses light. The light from the head-worn projectors is projected towards optical retro-reflective screens. After reaching the retro-reflective material, the light is redirected in a small cone back towards the source. This maximizes the power received by the user, whose eye (i.e., the right eye for the right-eye projected image) is located within the path of the small cone of reflected light. (We shall explain the technology in further detail in Section 3.) Therefore, in spite of the low-power projection system mounted on the head, bright images can be perceived not only by one user but by each user of the screen, as they will not be sharing the same reflected light. Outside the path of reflection, no light will be received, because the cone of light along the path is small enough to be imperceptible by the other eye of the user. Therefore, no light leakage or cross talk is possible, either between the eyes of one user or between users. Because there is no cross talk, such technology, interestingly, can also allow private or secure information to be viewed in a public work setting.
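The brightness advantage of returning light in a narrow cone rather than diffusing it into a hemisphere can be made concrete with a simple solid-angle comparison. This is a back-of-the-envelope sketch, not a measurement from this paper; the 1° cone half-angle is an assumption chosen for illustration.

```python
import math

def retro_gain(cone_half_angle_deg):
    """Geometric concentration factor of an ideal retro-reflective screen
    relative to a screen that diffuses into a full hemisphere: the same
    returned flux is squeezed from 2*pi steradians into a narrow cone.
    The cone half-angle is an illustrative assumption."""
    cone_sr = 2 * math.pi * (1 - math.cos(math.radians(cone_half_angle_deg)))
    return (2 * math.pi) / cone_sr

# A 1-degree retro-reflection cone concentrates the light by a factor of
# several thousand, which is why low-power head-worn projectors suffice.
print(f"concentration ~ {retro_gain(1.0):.0f}x")
```

The actual gain of real materials depends on their diffraction and scatter properties, quantified later in Section 3.2, but the orders of magnitude explain why a head-worn projector can remain low power.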

The HMPD designs have evolved significantly since they were conceived. Fisher (1996) pioneered the concept of combining a projector mounted on the head with retro-reflective material strategically placed in the environment. The system proposed was biocular with a one-microdisplay architecture, and the same image was presented to both eyes. Fergason (1997) extended the conceptual design to binocular displays. He also made a conceptual improvement to the technology, consisting of adding retro-reflective material located at 90 degrees from the material placed strategically in the environment to increase light throughput. It is important to note that a key benefit of the HMPD in the original Fisher configuration is providing natural occlusion of virtual objects by real objects interposed between the HMPD and the retro-reflective material, such as a hand reaching out to grab a virtual object, as further described in Section 3. Fergason's dual retro-reflective material constitutes an improvement in system illumination; however, it also compromises the natural unidirectional occlusion inherently present in the Fisher HMPD.

The HMPD technology, based on commercially available optics, was first demonstrated by Kijima and Ojika (1997) and independently by Parsons and Rolland (1998) and Rolland, Parsons, Poizat, and Hancock (1998). Shortly after, Tachi and his group first developed a configuration named MEDIA X'tal, which was not head-mounted yet used two projectors positioned similarly to those in a HMPD, together with retro-reflective material (Kawakami et al., 1999). He then also applied the concept to a HMPD (Inami et al., 2000).

The Holy Grail of HMD technology development is lightweight, ergonomic design, and a fundamental question is whether projection optics inherently provides an advantage. Early in the development of the HMPD technology, we envisioned a mobile, lightweight, multiuser system and therefore sought to develop lightweight, compact optics. A custom-designed 23 g per eye compact projection optics using commercially off-the-shelf optical lenses was first designed by Poizat and Rolland (1998). The design was supported by an analysis of retro-reflective materials for imaging (Girardot & Rolland, 1999) and a study of the engineering of HMPDs (Hua, Girardot, Gao, & Rolland, 2000). An ultralightweight (i.e., less than 8 g per eye), 52° diagonal FOV, color-corrected projection optics was next designed using a 1.35 in. diagonal microdisplay and aspherical and diffractive optical elements (DOEs) (Hua & Rolland, 2004). The design of a 70° FOV followed using the same microdisplay (Ha & Rolland, 2004). Next, a 42° FOV projection optics was designed using a 0.6 in. diagonal organic light emitting display (OLED) (Martins, Ha, & Rolland, 2003).

Such design approaches based on DOEs not only provide ultralightweight solutions but also, importantly, provide an elegant design approach in terms of chromatic aberration correction. One main insight into such a correction comes from realizing that a DOE may be thought of as an optical grating that follows the laws of optical diffraction. Consequently, a DOE has a strong negative chromatic dispersion that can be used to achromatize refractive lenses. A trade-off associated with using DOEs is a potential small loss in efficiency, which can affect the contrast of high-contrast targets. Thus, in designing such systems, careful quantification of DOE efficiency across the visible spectrum must be performed to ensure good image contrast and optimal performance for the system specification.
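The achromatization argument can be sketched with the standard thin-lens condition for a hybrid refractive/diffractive element in contact; the Abbe values below are generic textbook numbers for the visible band, not parameters from this paper's designs:

```latex
% Total power and achromatic condition for two thin elements in contact:
\[
\phi = \phi_{\mathrm{ref}} + \phi_{\mathrm{DOE}}, \qquad
\frac{\phi_{\mathrm{ref}}}{\nu_{\mathrm{ref}}} +
\frac{\phi_{\mathrm{DOE}}}{\nu_{\mathrm{DOE}}} = 0 .
\]
% The effective "Abbe number" of any diffractive surface over the visible
% band depends only on the design wavelengths, and is strongly negative:
\[
\nu_{\mathrm{DOE}} = \frac{\lambda_d}{\lambda_F - \lambda_C}
                   = \frac{587.6}{486.1 - 656.3} \approx -3.45 .
\]
% With, e.g., an acrylic refractive element (nu_ref ~ 57), both powers
% remain positive:
\[
\phi_{\mathrm{ref}} = \frac{\nu_{\mathrm{ref}}}
                           {\nu_{\mathrm{ref}} - \nu_{\mathrm{DOE}}}\,\phi
                    \approx 0.94\,\phi, \qquad
\phi_{\mathrm{DOE}} \approx 0.06\,\phi .
\]
```

Because both powers stay positive, no heavy negative flint-like element is needed, which is one reason hybrid designs can be so light.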

The steady progress toward engineering lightweight HMPDs is shown in Figure 1. Early in the design, we considered both top- and side-mounted configurations. We originally chose a top-mounted configuration, as shown in Figure 1a, for two reasons: (1) to provide the best possible alignment of the optics, and (2) to more easily mount the bulky electronics. Upon a successful first prototype, in collaboration with NVIS Inc., we had the main electronics board associated with the microdisplay placed away from it in a control box, and we investigated a side-mounted display driven by a medical training application further described in Section 5.1. In this application, the trainees must bend over a human patient simulator (HPS), and moving the packaging weight to the side was desired. This approach led to the HMPD shown in Figure 1b. We learned that such a configuration successfully moved weight to the side; however, it also produced some tunnel vision, as the user's peripheral vision was rather occluded by the side-mounted optics. A recent design, shown in Figure 1c, capitalized on ultracompact electronics associated with OLED microdisplays (e.g., http://www.emagin.com). To avoid the tunnel vision aspect, which we found very disturbing, we targeted a top-mounted geometry to allow full peripheral vision. Also, because of the extremely compact electronics of the microdisplay, in this case the overall system is less than 600 g including cables, and it can be further lightened past this early prototype level, as metallic structures still remain. Having introduced the evolution of these systems, we consider the optics of these systems in greater detail in the following section, because the optical design is critical to their future development, making HMPDs light, bright, wide-FOV, wearable, and mobile.

Figure 1. Steady progress toward engineering lightweight HMPDs. (a) 2001 HMPD & FaceCapture; optics mount: top-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (b) 2003 HMPD; optics mount: side-mounted; microdisplay: AM-LCD; pixel resolution: (640 × 3) × 480. (c) 2004 HMPD; optics mount: top-mounted; microdisplay: OLED; pixel resolution: (800 × 3) × 600.


3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide-FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin, or equivalently the eyes of the user in our optical configuration, given that the position of the eyes of the user is made conjugate to the position of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Therefore, two unique optical components distinguish the HMPD technology from conventional HMDs and stereoscopic projection displays such as the CAVE: (1) projection optics, rather than eyepiece optics as used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than a diffusing screen as typically used in projection-based systems.

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field-of-View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of a HMPD is shown in Figure 2a-b. An important feature of eyepiece optics in a HMD is the propagation of the light solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates in the real environment up to the retro-reflective material before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD, it is possible to have a user occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.

The usage of projection optics allows for a larger FOV (i.e., 40° diagonal) and less optical distortion (2.5% at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs for an equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil-forming and non-pupil-forming systems. By pupil forming, we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint because, in computer graphics, a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil-forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the eyepoint chosen for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that, as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil-forming eyepiece, the pupil of the eye itself (i.e., the image of the eye's iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off in FOV versus weight. An advantage of non-pupil-forming optics, however, is that the user may benefit from a larger eye box in which eye movements may occur more freely without vignetting of the image (i.e., vignetting refers to partial or full loss of the image).
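The FOV-versus-weight trade-off for an external exit pupil can be illustrated with first-order geometry: the chief ray at the edge of the field must clear the last lens surface, so the clear aperture grows with eye relief and field angle, and mass grows roughly with the cube of the diameter. The eye relief and eye-box values below are illustrative assumptions, not parameters of any design in this paper.

```python
import math

def eyepiece_diameter_mm(fov_deg, eye_relief_mm=25.0, eyebox_mm=12.0):
    """First-order estimate of the clear aperture an external-pupil
    eyepiece needs: semi-diameter ~ eye_relief * tan(FOV/2) plus half
    the eye box. All numeric defaults are illustrative assumptions."""
    semi = eye_relief_mm * math.tan(math.radians(fov_deg / 2)) + eyebox_mm / 2
    return 2 * semi

# For geometrically similar lenses, mass scales roughly as diameter cubed,
# so modest FOV growth carries a steep weight penalty.
d40 = eyepiece_diameter_mm(40.0)
d60 = eyepiece_diameter_mm(60.0)
print(f"40 deg: {d40:.0f} mm aperture; 60 deg: {d60:.0f} mm; "
      f"relative mass ~ {(d60 / d40) ** 3:.1f}x")
```

A virtual pupil inside the projection lens sidesteps this scaling, which is the point made in the next paragraph.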

Projection optics is intrinsically a pupil-forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases, the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case, there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While such correction is currently readily available for various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids deterioration in image quality that can become more pronounced with increasing amounts of required correction. The fact that the microdisplay pixels remain square regardless of the level of predistortion compensation of the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus, distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of occlusion capability of virtual objects by real objects. A user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and Trachea Visible Human Dataset 3D Models courtesy of Celina Imielinska, Columbia University.)

3.2 The Retro-Reflective Screen: Any Shape, in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back onto the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies in the location of the material with respect to the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy in the location of the projected images and the material, as now quantified.

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric, Silver-beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film, Silver-cubed), both shown on a microscopic scale in Figure 4b.1 The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. From this analysis, it is shown that the micro corner-cube material will be superior in maximizing the light throughput and minimizing the loss in resolution due to diffraction. In a companion paper, an analysis of human performance in resolving small details with both types of material is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

1. Scotchlite is a registered trademark of the 3M company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.
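As a rough order-of-magnitude illustration of why the material should sit near the projected image (the values and the λ/d approximation below are assumptions for intuition, not the PSF model of Martins & Rolland), a retro-reflective micro-element of size d spreads light by roughly λ/d in angle, so a mismatch Δz between the focused image and the material grows the blur spot by about (λ/d)·Δz:

```python
# Order-of-magnitude sketch (assumed values, not the paper's PSF model):
# an element of size d diffracts light into an angle of roughly lambda/d,
# so displacing the material by dz from the focused image grows the blur
# spot by about (lambda/d) * dz.
def diffraction_blur_mm(wavelength_um, feature_um, mismatch_mm):
    """Approximate geometric growth of the blur spot, in mm."""
    return (wavelength_um / feature_um) * mismatch_mm

wavelength_um = 0.55                   # green light
for d_um in (100.0, 10.0):             # micro-element size
    for dz_mm in (1.0, 10.0, 100.0):   # image/material mismatch
        blur = diffraction_blur_mm(wavelength_um, d_um, dz_mm)
        print(f"d={d_um:5.0f} um, dz={dz_mm:5.0f} mm -> blur ~{blur:.3f} mm")
```

Under this crude model the blur grows linearly with the mismatch, and finer microstructures diffract more strongly, which is consistent with the requirement (Section 4.2) that material embedded in the display be placed in coincidence with the projected image.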

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-µm pixel size. While higher resolution would be preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight, four-element compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual field in both cases.
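As a sanity check, the quoted figures are mutually consistent. A short paraxial calculation (the helper names are ours, and a 4:3 microdisplay aspect ratio is assumed) reproduces both the 4.1 arc minutes per pixel and the 52° diagonal FOV from the 42-µm pixel, 35-mm focal length, and 1.35-in diagonal:

```python
import math

def angular_resolution_arcmin(pixel_pitch_m, focal_length_m):
    """Angular subtense of one pixel seen through the projection optics."""
    return math.degrees(math.atan(pixel_pitch_m / focal_length_m)) * 60

def diagonal_fov_deg(diagonal_m, focal_length_m):
    """Full field of view subtended by the microdisplay diagonal."""
    return 2 * math.degrees(math.atan((diagonal_m / 2) / focal_length_m))

pitch = 42e-6          # 42-um pixel
f = 35e-3              # 35-mm focal length
diag = 1.35 * 25.4e-3  # 1.35-in microdisplay diagonal, in meters

print(f"{angular_resolution_arcmin(pitch, f):.1f} arcmin/pixel")  # ~4.1
print(f"{diagonal_fov_deg(diag, f):.0f} deg diagonal FOV")        # ~52
```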

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, as do liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus they require additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1-in diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates,


especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions but fail to provide accurate cues of spatial attention, and they are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face without occlusion, with minimal interference within the user's visual field and only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figures 6a and 6b show the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXCLS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

The optical face capture system is being optimized for robustness, minimization of shadows on the face, and placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab) in collaboration with the ODA Lab

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for a feasibility study. (Frontal view courtesy of C. Reddy and G. Stockman, Michigan State University.)


(Optical Diagnostics and Applications Lab) unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
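The "basic optical imaging equations" used to select the 65-mm radius of curvature can be sketched with the standard mirror equation for a convex mirror; the 250-mm face-to-mirror distance below is an assumed, illustrative value, not a figure from the paper:

```python
# Hedged sketch of the mirror equation 1/so + 1/si = 1/f applied to a
# convex mirror (f = -R/2, R = 65 mm as in the paper). The object
# distance of 250 mm is an assumed, illustrative face-to-mirror distance.
def convex_mirror_image(object_dist_mm, radius_mm):
    """Return (distance of the virtual image behind the mirror,
    lateral magnification) for a convex mirror."""
    f = -radius_mm / 2.0
    si = 1.0 / (1.0 / f - 1.0 / object_dist_mm)  # negative => virtual image
    m = -si / object_dist_mm
    return -si, m

dist_behind, mag = convex_mirror_image(250.0, 65.0)
print(f"virtual face image ~{dist_behind:.0f} mm behind mirror, "
      f"minified {mag:.2f}x")
```

The convex surface thus forms a minified virtual image of the face close behind the mirror, which a short-focal-length lipstick camera can capture within its field of view.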

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of HMPDs to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and instead integrating the retro-reflective material within the HMPD itself, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 µm. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001, we have built two deployable (10-minute setup) displays 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs, ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses, as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and will provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)


location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).2

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance, in the form of overlaid and easily accessible virtual information (e.g., labels, "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in

2. Polaris is a registered trademark of Northern Digital Inc.


performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user, to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached in an egocentric framework (i.e., a field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how the spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments that are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require the simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct-perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties for any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, with retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also offering potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.



conceptual design to binocular displays. He also made a conceptual improvement to the technology, consisting of adding retro-reflective material located at 90 degrees from the material placed strategically in the environment, to increase light throughput. It is important to note that a key benefit of the HMPD in the original Fisher configuration is providing natural occlusion of virtual objects by real objects interposed between the HMPD and the retro-reflective material, such as a hand reaching out to grab a virtual object, as further described in Section 3. Fergason's dual retro-reflective material constitutes an improvement in system illumination. However, it also compromises the natural unidirectional occlusion inherently present in the Fisher HMPD.

Early demonstrations of the HMPD technology based on commercially available optics were given by Kijima and Ojika (1997), and independently by Parsons and Rolland (1998) and Rolland, Parsons, Poizat, and Hancock (1998). Shortly after, Tachi and his group first developed a configuration named MEDIA X'tal, which was not head-mounted, yet used two projectors positioned similarly to those in a HMPD, together with retro-reflective material (Kawakami et al., 1999). He then also applied the concept to a HMPD (Inami et al., 2000).

The Holy Grail of HMD technology development is lightweight, ergonomic design, and a fundamental question is whether projection optics inherently provides an advantage. Early in the development of the HMPD technology, we envisioned a mobile, lightweight, multiuser system and therefore sought to develop lightweight, compact optics. A custom-designed 23 g per eye compact projection optics using commercial off-the-shelf optical lenses was first designed by Poizat and Rolland (1998). The design was supported by an analysis of retro-reflective materials for imaging (Girardot & Rolland, 1999) and a study of the engineering of HMPDs (Hua, Girardot, Gao, & Rolland, 2000). An ultralightweight (i.e., less than 8 g per eye), 52° diagonal FOV, color-corrected projection optics was next designed using a 1.35 in. diagonal microdisplay and aspherical and diffractive optical elements (DOEs) (Hua & Rolland, 2004). The design of a 70° FOV followed, using the same microdisplay (Ha & Rolland, 2004). Next, a 42° FOV projection optics was designed using a 0.6 in. diagonal organic light-emitting display (OLED) (Martins, Ha, & Rolland, 2003).

Such design approaches based on DOEs not only provide ultralightweight solutions but also, importantly, provide an elegant design approach in terms of chromatic aberration correction. One main insight into such a correction comes from realizing that a DOE may be thought of as an optical grating that follows the laws of optical diffraction. Consequently, a DOE has a strong negative chromatic dispersion that can be used to achromatize refractive lenses. A trade-off associated with using DOEs is a potential small loss in efficiency, which can affect the contrast of high-contrast targets. Thus, in designing such systems, careful quantification of DOE efficiency across the visible spectrum must be performed to ensure good image contrast and optimal performance for the system specification.
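The achromatization argument can be made concrete with first-order thin-lens algebra. The sketch below is illustrative only: the refractive Abbe number is an assumed acrylic-like value and the total power corresponds to the 35-mm focal length discussed later; the paper does not give the actual power split of its designs. Over the visible F-C band, a DOE behaves like a grating with effective Abbe number V = λd/(λF − λC), a large negative value:

```python
# Illustrative thin-lens achromat combining a refractive lens and a DOE.
# Assumptions (not from the paper): refractive Abbe number ~57 (acrylic-like),
# total power 1/35 mm^-1. The DOE's effective Abbe number over the visible
# band is lambda_d / (lambda_F - lambda_C), which is strongly negative.

LAMBDA_D, LAMBDA_F, LAMBDA_C = 587.6, 486.1, 656.3  # nm (d, F, C lines)

def achromat_powers(phi_total, v_refractive, v_doe):
    """Split total power so the combined chromatic focal shift vanishes:
         phi_r + phi_d = phi_total
         phi_r / V_r + phi_d / V_d = 0
    """
    phi_r = phi_total * v_refractive / (v_refractive - v_doe)
    phi_d = phi_total - phi_r
    return phi_r, phi_d

v_doe = LAMBDA_D / (LAMBDA_F - LAMBDA_C)   # about -3.45
phi_r, phi_d = achromat_powers(1 / 35.0, v_refractive=57.0, v_doe=v_doe)
print(f"V_doe = {v_doe:.2f}")
print(f"refractive power = {phi_r:.5f} /mm, DOE power = {phi_d:.5f} /mm")
```

Note that the DOE ends up carrying only a few percent of the total power, and with the same sign as the refractive part, which is why a refractive/diffractive achromat avoids the strong negative element a two-glass achromat would require.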

The steady progress toward engineering lightweight HMPDs is shown in Figure 1. Early in the design we considered both top- and side-mounted configurations. We originally chose a top-mounted configuration, as shown in Figure 1a, for two reasons: (1) to provide the best possible alignment of the optics, and (2) to more easily mount the bulky electronics. Upon a successful first prototype, in collaboration with NVIS Inc., we had the main electronics board associated with the microdisplay placed away from it in a control box, and we investigated a side-mounted display driven by a medical training application further described in Section 5.1. In this application the trainees must bend over a human patient simulator (HPS), and moving the packaging weight to the side was desired. This approach led to the HMPD shown in Figure 1b. We learned that such a configuration successfully moved weight to the side; however, it also produced some tunnel vision, as the user's peripheral vision was rather occluded by the side-mounted optics. A recent design, shown in Figure 1c, capitalized on the ultracompact electronics associated with OLED microdisplays (e.g., http://www.emagin.com). To avoid the tunnel vision aspect, which we found very disturbing, we targeted a top-mounted geometry to allow full peripheral vision. Also, because of the extremely compact electronics of the microdisplay in this case, the overall system is less than 600 g including cables, and can be further lightened past this early prototype level, as metallic structures still remain. Having introduced the evolution of these systems, we consider the optics of these systems in greater detail in the following section, because the optical design is critical to their future development, making HMPDs light, bright, wide-FOV, wearable, and mobile.

Figure 1. Steady progress toward engineering lightweight HMPDs. (a) 2001 HMPD & FaceCapture: top-mounted optics mount; AM-LCD microdisplay; pixel resolution (640 × 3) × 480. (b) 2003 HMPD: side-mounted optics mount; AM-LCD microdisplay; pixel resolution (640 × 3) × 480. (c) 2004 HMPD: top-mounted optics mount; OLED microdisplay; pixel resolution (800 × 3) × 600.


3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin or, equivalently in our optical configuration, toward the eyes of the user, given that the positions of the eyes of the user are made conjugate to the positions of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Two unique optical components therefore distinguish the HMPD technology from conventional HMDs and from stereoscopic projection displays such as the CAVE: (1) projection optics, rather than the eyepiece optics used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than the diffusing screen typically used in projection-based systems.

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field-of-View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of a HMPD is shown in Figure 2a and 2b. An important feature of eyepiece optics in a HMD is the propagation of the light solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates in the real environment, up to the retro-reflective material, before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD it is possible to have a user occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

The usage of projection optics allows for a larger FOV (i.e., 40° diagonal and greater) and less optical distortion (less than 2.5% at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs of equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.

In conventional HMDs using eyepiece optics, one may distinguish between pupil forming and non-pupil forming systems. By pupil forming we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the chosen eyepoint for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that, as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil forming eyepiece, the pupil of the eye itself (i.e., the image of the eye iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off in FOV versus weight. An advantage of non-pupil forming optics, however, is that the user may benefit from a larger eye box, where eye movements may more freely occur without vignetting of the image (i.e., vignetting refers to partial or full loss of the image light).
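To see why weight grows so quickly with FOV for an external exit pupil, consider a toy first-order model (all numbers below are assumed for illustration; they are not from the paper): the eye-side lens element must be wide enough to pass the full-field beam at the eye relief distance, and glass weight grows roughly with the cube of the element diameter.

```python
import math

# Toy model of the FOV-versus-weight trade-off for an eyepiece with an
# external exit pupil. Assumed numbers (not from the paper): 25 mm eye
# relief, 12 mm exit pupil diameter.
EYE_RELIEF_MM = 25.0
PUPIL_MM = 12.0

def lens_diameter_mm(fov_deg):
    """Minimum clear diameter of the eye-side element: the chief ray of the
    edge field climbs ER*tan(FOV/2) above axis, plus half the pupil."""
    half = math.radians(fov_deg / 2.0)
    return 2.0 * (EYE_RELIEF_MM * math.tan(half) + PUPIL_MM / 2.0)

d40 = lens_diameter_mm(40.0)
d80 = lens_diameter_mm(80.0)
# Weight scales roughly with glass volume, i.e., ~ diameter cubed.
print(f"40 deg FOV: {d40:.1f} mm;  80 deg FOV: {d80:.1f} mm")
print(f"relative weight factor: {(d80 / d40) ** 3:.1f}x")
```

Doubling the FOV in this sketch roughly doubles the required element diameter, and hence multiplies the element weight by a factor of several, which is the penalty the virtual-pupil projection geometry avoids.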

Projection optics is intrinsically a pupil forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter, at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases, the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While such correction is currently readily available for various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that can become more pronounced with increasing amounts of required correction. The fact that the microdisplay pixels remain square regardless of the level of predistortion compensation of the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus, distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of occlusion capability of virtual objects by real objects. The user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and trachea: Visible Human Dataset 3D models courtesy of Celina Imielinska, Columbia University.)
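In first-order terms, the zero-distortion argument can be summarized as follows (standard optics notation, not taken from the paper):

```latex
% Distortion relative to the ideal pinhole mapping  h_{\mathrm{ideal}}(\theta) = f' \tan\theta :
D(\theta) \;=\; \frac{h(\theta) - f'\tan\theta}{f'\tan\theta} \times 100\,\% .
% With the exit pupil at the nodal points (unit angular magnification), the
% chief ray leaves at the same angle \theta at which it enters, so
% h(\theta) = f'\tan\theta \quad\Rightarrow\quad D(\theta) = 0
% at all field angles.
```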

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back along the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies between the location of the material and the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy in the location of the projected images and the material, as now quantified.
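The retro-reflection property is easy to verify for an idealized corner-cube element: three successive reflections off mutually orthogonal mirrors each negate one component of the ray direction, so any incoming ray exits antiparallel to itself, whatever the incidence angle. A minimal sketch (idealized geometry; real materials add the diffraction effects discussed next):

```python
import numpy as np

# Idealized corner-cube retro-reflection: three mutually orthogonal mirrors
# with normals along x, y, z. Each reflection flips the direction component
# along its mirror normal, so after three bounces d -> -d for ANY ray.

def reflect(d, n):
    """Reflect direction d off a plane mirror with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def corner_cube(d):
    for n in (np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]),
              np.array([0.0, 0.0, 1.0])):
        d = reflect(d, n)
    return d

ray = np.array([0.3, -0.5, 0.81])
ray /= np.linalg.norm(ray)
out = corner_cube(ray)
print(np.allclose(out, -ray))  # the ray returns along its incident path
```

This direction-reversal is exactly why the perceived image is, to first order, independent of screen shape and tilt: every bundle returns toward its origin, the exit pupil conjugate to the eye.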

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric, Silver-beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film, Silver-cubed), both shown on a microscopic scale in Figure 4b.¹ The analysis shown in Figure 4a quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. From this analysis, it is shown that the micro corner-cube material is superior in maximizing the light throughput and minimizing the loss in resolution due to diffraction. In a companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

¹ Scotchlite is a registered trademark of the 3M company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.
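The dependence of blur on the image-to-material distance can be sketched with a crude scalar-diffraction estimate (our simplification, with assumed element pitches; the paper's Figure 4 reports the actual modeled and measured PSFs): an element of pitch a spreads light over a half-angle of roughly λ/a, so an image focused a distance Δz from the material returns blurred by roughly w ≈ 2Δz·λ/a. Larger-aperture corner cubes therefore diffract less than fine beads, consistent with the analysis above.

```python
# Back-of-the-envelope diffraction blur for a retro-reflective screen.
# All pitches below are assumed for illustration only; they are not the
# measured parameters of the 3M materials discussed in the text.

WAVELENGTH_MM = 550e-6  # 550 nm, mid-visible, in mm

def blur_width_mm(pitch_mm, dz_mm):
    """Approximate blur w ~ 2 * dz * (lambda / a): the diffraction
    half-angle lambda/a applied over the out-and-back defocus dz."""
    return 2.0 * dz_mm * (WAVELENGTH_MM / pitch_mm)

for pitch_um, label in ((60, "bead-like pitch, 60 um (assumed)"),
                        (200, "corner-cube-like pitch, 200 um (assumed)")):
    w = blur_width_mm(pitch_um * 1e-3, dz_mm=100.0)
    print(f"{label}: blur ~ {w:.2f} mm at 100 mm defocus")
```

The model also makes the practical guideline quantitative: halving the distance between the projected image and the material halves the diffraction blur, which is why the image should be focused as close to the screen as possible.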

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, their resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35 in. diagonal, backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-μm pixel size. While higher resolution would have been preferred, the availability in size and color of this microdisplay was determinant in the choice made. A 52° FOV optics per eye, with a 35-mm focal length, was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight, four-element compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual field in both cases.
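These design numbers follow from first-order geometry and can be checked directly from the values given in the text (1.35 in. microdisplay diagonal, 35-mm focal length, 42-μm pixels):

```python
import math

# First-order check of the 52-deg FOV / ~4 arcmin design point: a 1.35 in.
# diagonal microdisplay behind a 35 mm focal length projection lens, with
# 42 micron pixels (all values from the text).
FOCAL_MM = 35.0
DIAG_MM = 1.35 * 25.4          # microdisplay diagonal in mm
PIXEL_MM = 0.042               # 42 um pixel

# Diagonal full FOV: 2 * atan(half-diagonal / focal length)
fov_deg = 2.0 * math.degrees(math.atan((DIAG_MM / 2.0) / FOCAL_MM))

# On-axis angular subtense of one pixel, in arc minutes
pix_arcmin = math.degrees(math.atan(PIXEL_MM / FOCAL_MM)) * 60.0

print(f"diagonal FOV   ~ {fov_deg:.0f} deg")       # ~52 deg
print(f"pixel subtense ~ {pix_arcmin:.1f} arcmin")  # ~4.1 arcmin
```

The same two lines explain the design tension noted above: a smaller microdisplay shrinks the FOV for a fixed focal length, and shortening the focal length to recover FOV enlarges the angular size of each pixel.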

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, as do liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter must be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but they fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention to a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), is described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of HMPD optics, it would be advantageous to extend the technology to mobile systems. A mobile HMPD (M-HMPD) is described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field, and with only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

Figure 6a,b show the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXC-LS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors. The optical face capture system is being optimized for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). A feasibility demonstration of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for a feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)
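As a rough illustration of the "basic optical imaging equations" invoked above for the 65-mm radius convex mirror, the sketch below applies the spherical-mirror equation. The 150-mm face-to-mirror working distance is an assumed value for illustration, not a figure reported in the paper.

```python
# Spherical-mirror imaging sketch (assumed working distance, illustrative only).
R = -65.0    # mm; negative radius for a convex mirror (common sign convention)
d_o = 150.0  # mm; ASSUMED distance from the face to the mirror

# Mirror equation: 1/d_o + 1/d_i = 2/R  ->  solve for the image distance d_i.
d_i = 1.0 / (2.0 / R - 1.0 / d_o)
m = -d_i / d_o  # transverse magnification

# A convex mirror forms a demagnified virtual image behind its surface,
# which lets a short-focal-length lipstick camera image the whole face.
print(f"virtual image at {d_i:.1f} mm, magnification {m:.2f}")
# -> virtual image at -26.7 mm, magnification 0.18
```

Varying `d_o` and `R` in such a model is one plausible way to trade off face coverage against image scale when choosing the mirror curvature.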

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face with each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of HMPDs to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and instead integrating the retro-reflective material within the HMPD for imaging, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens, for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in. diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures at the 10-µm scale. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinically guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of currently available eyepiece-based see-through AR systems. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center, networked to some of the other ARCs as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization, including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form with 52° ultralightweight optics lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall-3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Data Glove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact with the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no crosstalk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.

The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. Firstly, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Secondly, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses, as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and will provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working toward interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of year 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us, as part of a related project, on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there sweet-spot locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of the space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly with location, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how the spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, together with retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPD spaces, such as the quickly deployable augmented reality centers (ARCs), and of mobile AR systems, users can interact with 3D objects and other agents located around them or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, and IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association (1992) Guidelines for cardiopul-monary resuscitation and emergency cardiac caremdashPart IIAdult Basic Life Support Journal of the American MedicalAssociation 268 2184ndash2198

Billinghurst M Kato H amp Poupyrev I (2001) Themagicbook Moving seamlessly between reality and virtual-ity IEEE Computer Graphics and Applications (CGA)21(3) 6ndash8

Biocca F amp Rolland JP (2004 August) US Patent No6774869 Washington DC US Patent and TrademarkOffice

Biocca F David P Tang A amp Lim L (2004 May) Doesvirtual space come precoded with meaning Location aroundthe body in virtual space affects the meaning of objects andagents Paper presented at the 54th Annual Conference ofthe International Communication Association New Or-leans LA

Biocca F Eastin M amp Daugherty T (2001) Manipulat-ing objects in the virtual space around the body Relationshipbetween spatial location ease of manipulation spatial recalland spatial ability Paper presented at the InternationalCommunication Association Washington DC

Biocca F Harms C amp Burgoon J (2003) Towards amore robust theory and measure of social presence Reviewand suggested criteria Presence Teleoperators and VirtualEnvironments 12(5) 456ndash480


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective

Rolland et al 547

head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE AeroSense, Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.



to allow full peripheral vision. Also, because of the extremely compact electronics of the microdisplay in this case, the overall system weighs less than 600 g including cables, and can be lightened further past this early prototype level, as metallic structures still remain. Having introduced the evolution of these systems, we consider their optics in greater detail in the following section, because the optical design is critical to their future development, making HMPDs light, bright, wide-FOV, wearable, and mobile.

Figure 1. Steady progress toward engineering lightweight HMPDs. (a) 2001 HMPD and face capture: optics mount; top-mounted microdisplay; AM-LCD; pixel resolution 640 × 480. (b) 2003 HMPD: optics mount; side-mounted microdisplay; AM-LCD; pixel resolution 640 × 480. (c) 2004 HMPD: optics mount; top-mounted microdisplay; OLED; pixel resolution 800 × 600.


3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide-FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin, or equivalently toward the eyes of the user in our optical configuration, given that the positions of the eyes of the user are made conjugate to the positions of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Therefore, two unique optical components distinguish the HMPD technology from conventional HMDs and stereoscopic projection displays such as the CAVE: (1) projection optics, rather than the eyepiece optics used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than the diffusing screen typically used in projection-based systems.
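The retro-reflective behavior invoked above can be sketched numerically: an ideal corner-cube retro-reflector reflects a ray off three mutually orthogonal faces, negating one component of the direction vector at each bounce, so the ray exits antiparallel to how it arrived. A minimal illustration (not the authors' code; the ray direction is an arbitrary assumed value):

```python
# Ideal corner-cube retro-reflection: three mutually orthogonal mirror
# faces each negate one component of the incoming direction vector,
# so the exiting ray is exactly antiparallel to the incident ray.

def reflect(direction, normal):
    """Mirror reflection of a direction vector off a plane with unit normal."""
    dot = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2 * dot * n for d, n in zip(direction, normal))

def corner_cube(direction):
    """Apply the three orthogonal faces of an ideal corner cube in turn."""
    for normal in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        direction = reflect(direction, normal)
    return direction

incident = (0.6, -0.48, 0.64)   # arbitrary unit-length ray direction
returned = corner_cube(incident)
print(returned)                  # (-0.6, 0.48, -0.64): back toward the origin
```

This antiparallel return is what keeps the projected light confined to the user's own eyes regardless of the screen's shape or tilt.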

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field-of-View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of an HMPD is shown in Figure 2a-b. An important feature of eyepiece optics in an HMD is that the light propagates solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates into the real environment, up to the retro-reflective material, before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD, it is possible for a user to occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

The usage of projection optics allows for a larger FOV (i.e., 40° diagonal) and less optical distortion (2.5%

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.


at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs of equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil-forming and non-pupil-forming systems. By pupil forming we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil-forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the chosen eyepoint for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil-forming eyepiece, the pupil of the eye itself (i.e., the image of the eye's iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off of FOV versus weight. An advantage of non-pupil-forming optics, however, is that the user may benefit from a larger eye box, where eye movements may occur more freely without vignetting of the image (i.e., partial or full loss of the image).
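The cube-law weight penalty of an external exit pupil can be made concrete with a rough scaling sketch: at fixed eye relief, the lens diameter grows roughly with the tangent of the half-FOV, and weight grows as the diameter cubed. The numbers below are illustrative, not a design from this paper:

```python
import math

def relative_lens_weight(fov_deg, ref_fov_deg=40.0):
    """Rough scaling: lens diameter grows ~ tan(half-FOV) for an external
    exit pupil at fixed eye relief, and weight grows as diameter cubed."""
    d_ratio = (math.tan(math.radians(fov_deg / 2))
               / math.tan(math.radians(ref_fov_deg / 2)))
    return d_ratio ** 3

# Growing an eyepiece-based design from a 40 deg to a 60 deg FOV:
print(f"{relative_lens_weight(60.0):.1f}")  # ~4.0x the weight of the 40 deg lens
```

Under this rough model, a modest FOV increase quadruples the eyepiece weight, which is why the HMPD's virtual-pupil projection optics, whose size stays nearly invariant with FOV, scales so much better.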

Projection optics is intrinsically a pupil-forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction in software or hardware. This property holds for HMPD designs with relatively large FOVs. While such correction is currently readily available for various hardware solutions, eliminating the need for it not only minimizes the cost of the system but also eliminates processing, so there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that becomes more pronounced with increasing amounts of required correction. Moreover, the microdisplay pixels remain square regardless of the level of predistortion compensation applied to the rendered image, which may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus, distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of the occlusion capability of virtual objects by real objects. The user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and trachea Visible Human Dataset 3D models courtesy of Celina Imielinska, Columbia University.)

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back onto the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies between the location of the material and the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy between the location of the projected images and the material, as now quantified.
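As a rough order-of-magnitude sketch of why the microstructure pitch matters (a single-aperture diffraction estimate, not the full PSF model of Martins & Rolland, 2003): light retro-reflected from a structure of pitch d spreads over an angle of roughly λ/d, and that angular spread becomes blur in proportion to how far the material sits from the projected image. The pitch below is an assumed illustrative value:

```python
import math

def diffraction_spread_arcmin(wavelength_m, pitch_m):
    """First-order diffraction angle lambda/d, expressed in arc minutes."""
    theta_rad = wavelength_m / pitch_m
    return math.degrees(theta_rad) * 60.0

# Green light on an assumed 100-micron microstructure pitch:
spread = diffraction_spread_arcmin(550e-9, 100e-6)
print(f"{spread:.1f} arcmin")  # ~18.9 arcmin of angular spread
```

Against the roughly 1 arc minute resolution of the human eye, a spread of this magnitude explains why the material must sit close to the projector's focused image for the blur to remain unnoticeable.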

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric, Silver-beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film, Silver-cubed), both shown on a microscopic scale in Figure 4b.1 The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation with the mathematical predictions. From this analysis, it is shown that the micro corner-cube material is superior in maximizing the light throughput and minimizing loss in resolution due to diffraction. In a companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

1. Scotchlite is a registered trademark of the 3M Company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment; likewise, the apparent size of a 3D object obtained from generating stereoscopic images will be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays; specifically, their size, their resolution, and whether they emit light themselves or require illumination optics. The smaller the microdisplay, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with 640 × 480 pixels and 42-μm pixel size. While higher resolution would have been preferred, the availability in size and color of this microdisplay was determinant in the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on ultralightweight four-element compact projection optics using a combination of DOEs (diffractive optical elements), plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual field in both cases.
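These figures can be checked with first-order geometry: a pixel of pitch p behind optics of focal length f subtends roughly atan(p/f), and the diagonal FOV follows from the half-diagonal of the active display area. A quick sanity check of the 52° and 4.1 arc minute values quoted above:

```python
import math

FOCAL_MM = 35.0        # projection optics focal length
PIXEL_UM = 42.0        # AM-LCD pixel pitch
H_PIX, V_PIX = 640, 480

# Angular subtense of one pixel (small angle, so atan(p/f) ~ p/f)
pixel_arcmin = math.degrees(math.atan(PIXEL_UM * 1e-3 / FOCAL_MM)) * 60
print(f"per-pixel resolution: {pixel_arcmin:.1f} arcmin")   # ~4.1 arcmin

# Diagonal FOV from the half-diagonal of the active area
diag_mm = math.hypot(H_PIX, V_PIX) * PIXEL_UM * 1e-3        # 800 px diagonal * 42 um
diag_fov = 2 * math.degrees(math.atan(diag_mm / 2 / FOCAL_MM))
print(f"diagonal FOV: {diag_fov:.0f} deg")                  # ~51 deg, near the 52 deg design value
```

Both results reproduce the paper's numbers to within rounding, confirming the internal consistency of the design parameters.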

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, such as liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter must be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., ~1 in diagonal), their brightness, and their resolution.
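The vignetting constraint just described can be sketched with simple geometry: because the illumination is telecentric, every field point sends an identical cone of light parallel to the axis, so at a distance z from the panel the beam footprint is the panel size plus twice z tan(u), where u is the cone half-angle set by the working f-number (a paraxial approximation). The panel distance and f-number below are assumed illustrative values, not the paper's design figures:

```python
import math

def min_lens_diameter_mm(panel_diag_mm, distance_mm, f_number):
    """Minimum clear aperture to pass a telecentric beam without vignetting:
    the panel footprint plus the cone spread accumulated over the throw
    (paraxial approximation: half-angle ~ atan(1 / (2 N)))."""
    half_angle = math.atan(1.0 / (2.0 * f_number))
    return panel_diag_mm + 2.0 * distance_mm * math.tan(half_angle)

# A 1-in (25.4 mm) LCOS panel with a lens 20 mm away, working at f/2.8 (assumed):
print(f"{min_lens_diameter_mm(25.4, 20.0, 2.8):.1f} mm")  # larger than the panel itself
```

Even these modest assumed numbers push the required aperture well past the panel diagonal, which is the weight and bulk penalty the text attributes to LCOS-based designs.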

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates,


especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention; for example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration, because collaborators lack a common action space. VR systems using HMPDs can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention to a distributed AR interface. An emerging version of the HMPD design tailored for face-to-face interaction, referred to as the teleportal HMPD (T-HMPD), is described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) is described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field, and with only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figure 6a and 6b show the left and right views, respectively, from the lipstick cameras through the miniature mirrors (i.e., 1 in in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXC-LS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.


mirrors. The optical face capture system is being optimized for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter so as to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model; other embodiments of the remote participants will also be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

Figure 6. Face images captured using the face capture system: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for a feasibility study. (Frontal view courtesy of C. Reddy and G. Stockman, Michigan State University.)
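The role of the 65-mm radius of curvature mentioned above can be illustrated with the Gaussian mirror equation: a convex mirror forms a minified, upright virtual image of the face, which the short-focal-length lipstick camera can then capture within its field. The face-to-mirror distance below is an assumed illustrative value, not a measurement from the paper:

```python
# Gaussian mirror equation 1/do + 1/di = 1/f, with f = R/2
# (sign convention: f < 0 for a convex mirror, di < 0 for a virtual image).

R_MM = -65.0        # 65-mm radius of curvature, negative for a convex surface
f = R_MM / 2.0      # focal length = -32.5 mm

do = 150.0          # assumed face-to-mirror distance in mm (illustrative)
di = 1.0 / (1.0 / f - 1.0 / do)   # image distance from the mirror equation
m = -di / do                       # transverse magnification

print(f"image at {di:.1f} mm (virtual), magnification {m:.2f}")
```

The negative image distance confirms a virtual image behind the mirror, and the magnification well below one shows how the convex surface shrinks the whole face into the small field of a 4-mm focal length camera.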

Among local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes could be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector could be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and instead integrating the retro-reflective material within the HMPD itself, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens, chosen for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference from the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 μm instead of 100 μm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in diagonal OLED of higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 μm. Such microstructures are not commercially available, and custom-designed materials are being investigated.
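One way to read the scaling argument above (a hedged interpretation with assumed numbers, not figures from the paper): if the microstructure pitch must remain a fixed fraction of the local pixel footprint, then an internal image roughly ten times smaller than the pixel footprint on a room-scale screen shrinks the usable pitch by the same factor:

```python
def required_pitch_um(wall_pitch_um, wall_pixel_um, internal_pixel_um):
    """Scale the microstructure pitch with the local pixel footprint,
    keeping the pitch-to-pixel ratio fixed (simplifying assumption)."""
    return wall_pitch_um * internal_pixel_um / wall_pixel_um

# Assumed illustrative footprints: ~1 mm pixels on a wall-mounted screen
# versus ~100 um pixels at the internal image plane near the microdisplay.
print(required_pitch_um(100.0, 1000.0, 100.0))  # 10.0 -> order of 10 um, as in the text
```

This simple proportionality reproduces the order-of-magnitude jump from 100 μm sheeting to the 10 μm custom microstructures the text describes.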

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since 2001 we have built two deployable (10-minute setup) displays 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft-diameter center networked to some of the other ARCs, as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).
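Networked ARCs must keep the shared virtual scene consistent across sites. As a minimal sketch of the problem, the snippet below applies timestamped, per-object last-writer-wins updates; this is an illustrative assumption for exposition, not the synchronization algorithm of DARE or of Hamza-Lup and Rolland (2004b).

```python
import time

# Illustrative shared-state sketch for networked ARC sites: each site keeps a
# per-object timestamp and accepts a remote update only if it is newer than
# its local copy (last-writer-wins). Hypothetical class, not the DARE code.
class SharedScene:
    def __init__(self):
        self.objects = {}   # object id -> (timestamp, pose)

    def local_update(self, obj_id, pose):
        stamp = time.time()
        self.objects[obj_id] = (stamp, pose)
        return (obj_id, stamp, pose)          # message to broadcast to peers

    def apply_remote(self, obj_id, stamp, pose):
        current = self.objects.get(obj_id)
        if current is None or stamp > current[0]:
            self.objects[obj_id] = (stamp, pose)

# Two ARC sites converge on the newest pose of the same virtual object.
site_a, site_b = SharedScene(), SharedScene()
msg = site_a.local_update("lung_model", (0.0, 1.2, 0.5))
site_b.apply_remote(*msg)
print(site_b.objects["lung_model"][1])  # (0.0, 1.2, 0.5)
```

A stale update arriving late (older timestamp) is simply discarded, which keeps all sites convergent without a central server.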

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities: (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. Firstly, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Secondly, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike, and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive, but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on an HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University).

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically-based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
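Landmark-based registration of a virtual model to the physical HPS, as described above, is commonly posed as a least-squares rigid fit between corresponding point sets. The following sketch shows one standard solution (the SVD-based Kabsch method) under the assumption of paired, noise-free landmarks; it illustrates the technique generically and is not the authors' implementation.

```python
import numpy as np

# Least-squares rigid registration (Kabsch method) of virtual-model landmarks
# A onto physical landmarks B -- an illustrative sketch, not the UIH code.
def rigid_fit(A, B):
    """A, B: (N, 3) corresponding points. Returns R (3x3), t (3,) with B ~ A @ R.T + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Usage: recover a known rotation/translation from synthetic landmarks.
rng = np.random.default_rng(0)
A = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = A @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_fit(A, B)
print(np.allclose(B, A @ R.T + t))  # True
```

In practice the tracked landmarks are noisy, so the same fit is applied in a least-squares sense over more points, and the residual serves as a registration-quality check.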

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, but also to remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface: AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels, "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though fields of floating objects attached to the body are a novel model not experienced in the natural world because of the simple laws of gravity.
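The notion of tools affixed to the central axis of the body can be made concrete with a small sketch: items are stored in a body-centered frame and mapped into world coordinates from the tracked position and heading of the user. The frame conventions and function names below are illustrative assumptions, not the Mobile Infospaces implementation.

```python
import math

# Illustrative sketch: virtual tools stored in a body-centered frame
# (x right, y up, z forward) follow the user's tracked position and heading.
def body_to_world(offset, user_pos, user_yaw):
    """Map a body-frame offset to world coordinates for a user at user_pos
    (x, y, z) whose heading about the vertical axis is user_yaw (radians)."""
    ox, oy, oz = offset
    c, s = math.cos(user_yaw), math.sin(user_yaw)
    wx = user_pos[0] + c * ox + s * oz     # rotate offset by heading...
    wz = user_pos[2] - s * ox + c * oz
    return (wx, user_pos[1] + oy, wz)      # ...then translate to the user

# A tool pinned 0.4 m in front of the chest stays in front as the user moves.
tool = (0.0, 1.3, 0.4)
print(body_to_world(tool, (0.0, 0.0, 0.0), 0.0))           # (0.0, 1.3, 0.4)
print(body_to_world(tool, (2.0, 0.0, 1.0), math.pi / 2))   # follows the turned user
```

Re-evaluating this mapping every frame is what makes a body-anchored menu appear "attached" to the user, in contrast to world-anchored overlays whose world coordinates are fixed.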

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these, efforts such as those of our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association (1992) Guidelines for cardiopul-monary resuscitation and emergency cardiac caremdashPart IIAdult Basic Life Support Journal of the American MedicalAssociation 268 2184ndash2198

Billinghurst M Kato H amp Poupyrev I (2001) Themagicbook Moving seamlessly between reality and virtual-ity IEEE Computer Graphics and Applications (CGA)21(3) 6ndash8

Biocca F amp Rolland JP (2004 August) US Patent No6774869 Washington DC US Patent and TrademarkOffice

Biocca F David P Tang A amp Lim L (2004 May) Doesvirtual space come precoded with meaning Location aroundthe body in virtual space affects the meaning of objects andagents Paper presented at the 54th Annual Conference ofthe International Communication Association New Or-leans LA

Biocca F Eastin M amp Daugherty T (2001) Manipulat-ing objects in the virtual space around the body Relationshipbetween spatial location ease of manipulation spatial recalland spatial ability Paper presented at the InternationalCommunication Association Washington DC

Biocca F Harms C amp Burgoon J (2003) Towards amore robust theory and measure of social presence Reviewand suggested criteria Presence Teleoperators and VirtualEnvironments 12(5) 456ndash480

546 PRESENCE VOLUME 14 NUMBER 5

Bryant D J (1992) A spatial representation system in hu-mans Psycholoquy 3(16) 1

Cruz-Neira C D Sandin J amp DeFanti T A (1993 Au-gust) Surround-screen projection-based virtual reality Thedesign and implementation of the CAVE Proceedings ofACM SIGGRAPH 1993 (pp 135ndash142) Anaheim CA

Cutting J E amp Vishton P M (1995) Perceiving layoutand knowing distances The integration relative potencyand contextual use of different information about depth InW Epstein amp S Rogers (Eds) Perception of space and mo-tion (pp 69ndash117) San Diego CA Academic Press

Davis L Hamza-Lup F amp Rolland J P (2004) A methodfor designing marker-based tracking probes InternationalSymposium on Mixed and Augmented Reality ISMAR rsquo04(pp 120ndash129) Washington DC

Davis L Rolland J P Hamza-Lup F Ha Y Norfleet JPettitt B et al (2003) Enabling a continuum of virtualenvironment experiences IEEE Computer Graphics and Ap-plications (CGA) 23(2) 10ndash12

Fidopiastis C M Furhman C Meyer C amp Rolland J P(2005) Methodology for the iterative evaluation of proto-type head-mounted displays in virtual environments Visualacuity metrics Presence Teleoperators and Virtual Environ-ments 14(5) 550ndash562

Fergason J (1997) US Patent No 5621572 WashingtonDC US Patent and Trademark Office

Fisher R (1996) US Patent No 5572229 Washington DCUS Patent and Trademark Office

Gabbard J L amp Hix D (2001) Researching usability de-sign and evaluation guidelines for Augmented Reality Sys-tems Retrieved May 1 2004 from wwwsvvteduclassesESM4714Student_Projclass00gabbard

Girardot A amp Rolland J P (1999) Assembly and investiga-tion of a projective head-mounted display (Technical ReportTR-99-003) Orlando FL University of Central Florida

Grusser O J (1983) Multimodal structure of the extraper-sonal space In A Hein amp M Jeannerod (Eds) Spatiallyoriented behavior (pp 327ndash352) New York Springer

Ha Y amp Rolland J P (2004 October) US Patent No6804066 B1 Washington DC US Patent and TrademarkOffice

Hamza-Lup F Davis L Hughes C amp Rolland J (2002)Where digital meets physicalmdashDistributed augmented real-ity environments ACM Crossroads 2002 9(3)

Hamza-Lup F G Davis L amp Rolland J P (2002 Sep-tember) The ARC display An augmented reality visualiza-

tion center International Symposium on Mixed and Aug-mented Reality (ISMAR) Darmstadt Germany RetrievedMay 17 2005 from httpstudierstubeorgismar2002demosismar_rollandpdf

Hamza-Lup F G amp Rolland J P (2004a March) Adap-tive scene synchronization for virtual and mixed reality envi-ronments Proceedings of IEEE Virtual Reality 2004 (pp99ndash106) Chicago IL

Hamza-Lup F G amp Rolland J P (2004b) Scene synchro-nization for real-time interaction in distributed mixed realityand virtual reality environments Presence Teleoperators andVirtual Environments 13(3) 315ndash327

Hamza-Lup F G Rolland J P amp Hughes C E (2005) Adistributed augmented reality system for medical trainingand simulation In B J Brian (Ed) Energy simulation-training ocean engineering and instrumentation Researchpapers of the link foundation fellows (Vol 4 pp 213ndash235)Rochester NY Rochester Press

Hua H Brown L D amp Gao C (2004) System and inter-face framework for SCAPE as a collaborative infrastructurePresence Teleoperators and Virtual Environments 13(2)234ndash250

Hua H Girardot A Gao C amp Rolland J P (2000) En-gineering of head-mounted projective displays Applied Op-tics 39(22) 3814ndash3824

Hua H Ha Y amp Rolland J P (2003) Design of an ultra-light and compact projection lens Applied Optics 42(1)97ndash107

Hua H amp Rolland J P (2004) US Patent No 6731434B1 Washington DC US Patent and Trademark Office

Huang Y Ko F Shieh H Chen J amp Wu S T (2002)Multidirectional asymmetrical microlens array light controlfilms for high performance reflective liquid crystal displaysSID Digest 869ndash873

Inami M Kawakami N Sekiguchi D Yanagida YMaeda T amp Tachi S (2000) Visuo-haptic display usinghead-mounted projector Proceedings of IEEE Virtual Real-ity 2000 (pp 233ndash240) Los Alamitos CA

Kawakami N Inami M Sekiguchi D Yangagida YMaeda T amp Tachi S (1999) Object-oriented displays Anew type of display systemsmdashFrom immersive display toobject-oriented displays IEEE International Conference onSystems Man and Cybernetics (Vol 5 pp 1066ndash1069)Piscataway NJ

Kijima R amp Ojika T (1997) Transition between virtual en-vironment and workstation environment with projective

Rolland et al 547

head-mounted display Proceedings of IEEE 1997 VirtualReality Annual International Symposium (pp 130ndash137)Los Alamitos CA

Krueger M (1977) Responsive environments Proceedings of theNational Computer Conference (pp 423ndash433) Montvale NJ

Krueger M (1985) VIDEOPLACEmdashAn artificial realityProceedings of ACM SIGCHIrsquo85 (pp 34ndash40) San Fran-cisco CA

Martins R Ha Y amp Rolland J P (2003) Ultra-compactlens assembly for a head-mounted projector UCF Patentfiled University of Central Florida

Martins R amp Rolland J P (2003) Diffraction properties ofphase conjugate material In C E Rash and E R Colin(Eds) Proceedings of the SPIE Aerosense Helmet- and Head-Mounted Displays VIII Technologies and Applications 5079277ndash283

Milgram P amp Kishino F (1994) A taxonomy of mixed real-ity visual displays IECE Trans Information and Systems(Special Issue on Networked Reality) E77-D(12) 1321ndash1329

Mou W Biocca F Owen C B Tang A Xiao F amp LimL (2004) Frames of reference in mobile augmented realitydisplays Journal of Experimental Psychology Applied 10(4)238ndash244

Mou W Biocca F Tang A amp Owen C B (2005) Spati-alized interface design of mobile augmented reality systemsInternational Journal of Human Computer Studies

Murray W B amp Schneider A J L (1997) Using simulatorsfor education and training in anesthesiology ASA Newsletter6(10) Retrieved May 1 2003 from httpwwwasahqorgNewsletters199710_97Simulators_1097html

Parsons J amp Rolland J P (1998) A non-intrusive displaytechnique for providing real-time data within a surgeonrsquoscritical area of interest Proceedings of Medicine Meets Vir-tual Reality 98 (pp 246ndash251) Newport Beach CA

Poizat D amp Rolland J P (1998) Use of retro-reflective sheetsin optical system design (Technical Report TR98-006) Or-lando FL University of Central Florida

Previc F H (1998) The neuropsychology of 3D space Psy-chological Bulletin 124(2) 123ndash164

Reddy C (2003) A non-obtrusive head mounted face cap-ture system Unpublished Masterrsquos Thesis Michigan StateUniversity

Reddy C Stockman G C Rolland J P amp Biocca F A(2004) Mobile face capture for virtual face videos InCVPRrsquo04 The First IEEE Workshop on Face Processing

in Video Retrieved July 7 2004 from httpwwwvisioninterfacenetfpiv04papershtml

Rizzolatti G Gentilucci M amp Matelli M (1985) Selectivespatial attention One center one circuit or many circuitsIn M I Posner amp O S M Marin (Eds) Attention andperformance (pp 251ndash265) Hillsdale NJ Erlbaum

Rodriguez A Foglia M amp Rolland J P (2003) Embed-ded training display technology for the armyrsquos future com-bat vehicles Proceedings of the 24th Army Science Conference(pp 1ndash6) Orlando FL

Rolland J P Biocca F Hua H Ha Y Gao C amp Har-rysson O (2004) Teleportal augmented reality systemIntegrating virtual objects remote collaborators and physi-cal reality for distributed networked manufacturing In KOng amp AYC Nee (Eds) Virtual and Augmented RealityApplications in Manufacturing (Ch 11 pp 179ndash200)London Springer-Verlag

Rolland J P amp Fuchs H (2001) Optical versus video see-through head-mounted displays In W Barfield amp TCaudell (Eds) Fundamentals of Wearable Computers andAugmented Reality (pp 113ndash156) Mahwah NJ Erlbaum

Rolland J P Ha Y amp Fidopiastis C (2004) Albertian er-rors in head-mounted displays Choice of eyepoints locationfor a near or far field tasks visualization JOSA A 21(6)901ndash912

Rolland J P Hamza-Lup F Davis L Daly J Ha YMartin G et al (2003 January) Development of a train-ing tool for endotracheal intubation Distributed aug-mented reality Proceedings of Medicine Meets Virtual Real-ity (Vol 11 pp 288ndash294) Newport Beach California

Rolland J P Martins R amp Ha Y (2003) Head-mounteddisplay by integration of phase-conjugate material UCFPatent Filed University of Central Florida

Rolland J P Parsons J Poizat D amp Hancock D (1998July) Conformal optics for 3D visualization Proceedings ofthe International Optical Design Conference 1998 SPIE3482 (pp 760ndash764) Kona HI

Rolland J P Stanney K Goldiez B Daly J Martin GMoshell M et al (2004 July) Overview of research inaugmented and virtual environments RAVES Proceedingsof the International Conference on Cybernetics and Informa-tion Technologies Systems and Applications (CITSArsquo04) 419ndash24

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.



3 Optics of the Head-Mounted Projection Display (HMPD): Making the Displays Light, Bright, Wide-FOV, Wearable, and Mobile

The optics of the HMPD consists essentially of (1) two microdisplays, one associated with each eye, and (2) associated projection optics that guides the projected images toward retro-reflective material. The optical property of such material is to retro-reflect light toward its direction of origin, or equivalently the eyes of the user in our optical configuration, given that the positions of the eyes of the user are made conjugate to the positions of the exit pupils of the optics via a beam splitter, as shown in Figure 2b. Two unique optical components therefore distinguish the HMPD technology from conventional HMDs and stereoscopic projection displays such as the CAVE: (1) projection optics, rather than the eyepiece optics used in conventional HMDs, and (2) a retro-reflective material strategically placed in the environment, rather than the diffusing screen typically used in projection-based systems.
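The retro-reflection property invoked here is commonly realized with corner-cube microstructures, as with the corner-cube material analyzed later in this section: reflecting a ray off three mutually orthogonal planes negates each component of its direction vector in turn, so the ray exits antiparallel to its incidence no matter the incidence angle. A minimal sketch of that geometry:

```python
def reflect(v, n):
    """Reflect direction vector v off a plane with unit normal n: v - 2(v.n)n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * d * ni for vi, ni in zip(v, n))

def corner_cube(v):
    """Apply successive reflections off the three mutually orthogonal faces."""
    for n in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        v = reflect(v, n)
    return v

# Any incoming direction exits exactly reversed:
print(corner_cube((0.3, -0.5, 0.81)))  # -> (-0.3, 0.5, -0.81)
```

In practice each micro corner cube performs this reversal over a small aperture, which is why the retro-reflected image is largely insensitive to the overall shape and tilt of the screen.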

3.1 Projection Optics vs. Eyepiece Optics: Lighter, Greater Field-of-View, and Less Distortion

A comparison of HMDs based on eyepiece optics versus the projection optics of an HMPD is shown in Figure 2a,b. An important feature of eyepiece optics in an HMD is that the light propagates solely within the HMD, between the microdisplay and the eye. In the case of projection optics in an HMPD, the light actually propagates in the real environment, up to the retro-reflective material, before returning to the user's eyes. This propagation scheme makes for a fundamental difference in the user experience of 3D objects and the environment. With the HMPD, it is possible to have a user occlude virtual objects with real objects interposed between the HMPD and the material, such as the hand of a user reaching out to a virtual object in an artificial reality center (ARC) display, as shown in Figure 3.

The usage of projection optics allows for a larger FOV (i.e., 40° diagonal) and less optical distortion (2.5%

Figure 2. (a) Eyepiece optics HMD: light emitted from the microdisplay reaches the eyes directly via the orientation of the beam splitter. (b) Projection optics HMD, referred to as HMPD: light is first sent to the environment before being retro-reflected toward the eyes of the user, a consequence of the orientation of the beam splitter and the positioning of retro-reflective material in the environment.


at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs of an equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil forming and non-pupil forming systems. By pupil forming we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the chosen eyepoint for the generation of the stereoscopic images (Rolland, Ha, & Fidopiastis, 2004). The main drawback of an external exit pupil is that as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil forming eyepiece, the pupil of the eye itself (i.e., the image of the eye iris through the cornea) constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off in FOV versus weight. An advantage of non-pupil forming optics, however, is that the user may benefit from a larger eye box, where eye movements may occur more freely without vignetting of the image (i.e., vignetting refers to partial or full loss of the image).
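The FOV-versus-weight trade-off for an external exit pupil can be sketched with first-order geometry: the last element of the optics must have a clear diameter of roughly 2·e·tan(FOV/2) plus the pupil diameter, for eye relief e, and lens mass grows roughly as the cube of that diameter. The eye relief, pupil, and FOV values below are illustrative assumptions only, not measurements of any particular HMD:

```python
import math

def required_diameter_mm(fov_deg, eye_relief_mm, pupil_mm):
    """Approximate clear diameter of the last element needed to pass the full
    field to an external exit pupil at the given eye relief."""
    return 2 * eye_relief_mm * math.tan(math.radians(fov_deg / 2)) + pupil_mm

# Illustrative numbers: 25 mm eye relief, 12 mm pupil, 40 deg vs 60 deg FOV.
d40 = required_diameter_mm(40, 25, 12)   # ~30 mm
d60 = required_diameter_mm(60, 25, 12)   # ~41 mm
mass_ratio = (d60 / d40) ** 3            # mass scales ~ diameter cubed: ~2.5x
```

Under these assumed numbers, widening the FOV from 40° to 60° multiplies the mass of the last element by roughly 2.5, which is the penalty a virtual (internal) exit pupil avoids.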

Figure 3. Demonstration of the occlusion capability of virtual objects by real objects. The user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and trachea Visible Human Dataset 3D models courtesy of Celina Imielinska, Columbia University.)

Projection optics is intrinsically a pupil forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While such correction is currently readily available for various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that becomes more pronounced with increasing amounts of required correction. Moreover, predistortion compensation of the rendered image interacts with the fact that the microdisplay pixels remain square regardless of the level of compensation, which may cause visual discomfort if the pixels are resolvable, as is commonly encountered in HMDs. Thus distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.
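For comparison, the per-frame software predistortion pass that distortion-free optics renders unnecessary looks roughly like the following. This is a minimal sketch assuming a first-order radial model with a hypothetical coefficient k1, not a model of any particular HMPD:

```python
def distort(x, y, k1):
    """First-order radial distortion of the optics: r -> r * (1 + k1 * r^2),
    applied to normalized image coordinates."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def predistort(x, y, k1, iters=5):
    """Numerically invert the distortion so that distort(predistort(p)) ~= p.
    This per-pixel resampling is the step a distortion-free design avoids."""
    xu, yu = x, y
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu, yu = x / (1.0 + k1 * r2), y / (1.0 + k1 * r2)  # fixed-point iteration
    return xu, yu
```

Applied to every rendered pixel every frame, this inversion adds latency and resampling blur, which is the processing cost the text argues against.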

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back along the same incident optical path. Theoretically, therefore, the perception of the image is independent of the shape and location of the screen made of such material. The technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Image quality (i.e., the sharpness of the image) is also limited in practice by optical diffraction at the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies between the location of the material and the optical image projected by the optics. Ideally, therefore, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy between the location of the projected images and the material, as now quantified.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric, silver beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film, silver cubed), both shown on a microscopic scale in Figure 4b.¹ The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. This analysis shows that the micro corner-cube material is superior in maximizing the light throughput and minimizing the loss in resolution due to diffraction. In a companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

1. Scotchlite is a registered trademark of the 3M Company.
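The scaling at work in this comparison can be sketched with the standard far-field estimate: light retro-reflected from a microstructure of width d spreads over a half-angle of roughly λ/d, so the blur at the image plane grows as λ·Δz/d for a discrepancy Δz between the projected image and the material. The feature sizes and defocus below are illustrative assumptions, not measurements of the 3M materials:

```python
WAVELENGTH = 550e-9  # mid-visible light, meters

def psf_width(feature_size_m, defocus_m):
    """Approximate full width of the diffraction blur: the far-field spread
    2 * (lambda / d) accumulated over the defocus distance delta_z."""
    return 2 * (WAVELENGTH / feature_size_m) * defocus_m

# Assumed feature sizes for illustration: 50 um vs 100 um structures,
# with the projected image 10 mm away from the material.
blur_small = psf_width(50e-6, 10e-3)    # ~220 um of blur
blur_large = psf_width(100e-6, 10e-3)   # ~110 um of blur
```

The model captures both trends in the text: blur grows linearly with the image-to-material discrepancy, and larger diffracting structures (here, the assumed corner-cube facets relative to finer beads) spread the retro-reflected light less.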

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, their resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in.-diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-μm pixel size. While higher resolution would have been preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye, with a 35-mm focal length, was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on ultralightweight, four-element, compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual field in both cases.
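These design numbers are mutually consistent under first-order geometry: the full FOV is 2·atan((display diagonal/2)/f), and the angular subtense of one pixel is atan(pixel pitch/f). A quick sketch of the arithmetic:

```python
import math

def fov_deg(display_diag_mm, focal_mm):
    """Full field of view for a display of the given diagonal behind optics
    of focal length f (first-order, pinhole model)."""
    return 2 * math.degrees(math.atan(display_diag_mm / 2 / focal_mm))

def pixel_arcmin(pixel_um, focal_mm):
    """Angular resolution, in arc minutes, of one pixel of the given pitch."""
    return 60 * math.degrees(math.atan(pixel_um * 1e-3 / focal_mm))

diag_mm = 1.35 * 25.4                 # 1.35-in.-diagonal microdisplay
print(fov_deg(diag_mm, 35.0))         # ~52 degrees
print(pixel_arcmin(42.0, 35.0))       # ~4.1 arc minutes per pixel
```

With the 35-mm focal length, the 1.35-in. display yields the stated 52° diagonal FOV, and the 42-μm pixel pitch yields the stated 4.1 arc minutes per pixel.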

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, as in liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information about the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on facial expressions but fail to provide accurate cues of spatial attention, and they are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPDs can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention to a distributed AR interface. An emerging version of the HMPD design tailored for face-to-face interaction, referred to as the teleportal HMPD (T-HMPD), is described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) is described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field, and with only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figure 6a,b shows the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm, from applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation the lipstick video cameras were Sony Electronics DXCLS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

The optical face capture system is being optimized for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter so as to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is demonstrated in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or shown as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table, as opposed to the head, for a feasibility study. (Frontal view courtesy of C. Reddy and G. Stockman, Michigan State University.)

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications where retro-reflective material cannot be placed in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and integrating the retro-reflective material solely within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is used for more than simply increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens, for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 μm instead of 100 μm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in.-diagonal OLED with higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of the roughly 10-μm-scale microstructure retro-reflective material. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of the eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a fully virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft-diameter center networked to some of the other ARCs, as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with the 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not otherwise obtain.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registering the virtual models with real landmarks on the HPS. Furthermore, we are working toward interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us, as part of a related project, on decimating the models for real-time 3D visualization, and which will further work with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance, in the form of overlaid and easily accessible virtual information (e.g., labels, "see-through" diagrams) and capabilities.

A research project called Mobile Infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information, including computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in

² Polaris is a registered trademark of Northern Digital Inc.

Rolland et al 543

performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our Mobile Infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The Mobile Infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model not experienced in the natural world, because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the Mobile Infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of CAVEs with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our Mobile Infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive

3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The Mobile Infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display: the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces, and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We

thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care: Part II, Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective


head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE AeroSense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing

in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization.


Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head-mounted three dimensional

display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.



at the corner of the FOV) than obtained with conventional eyepiece-based optical see-through HMDs for an equivalent weight. At the foundation of this capability is the location of the pupil, both within the projection optics and after the beam splitter.

In conventional HMDs using eyepiece optics, one may distinguish between pupil-forming and non-pupil-forming systems. By pupil forming we mean that a diaphragm (i.e., an aperture stop, more generally called a pupil, that limits the light passing through the system) is located within the optical system and is reimaged at the eye location. The eye location is referred to as the eyepoint, because in computer graphics a pinhole model is used to generate the stereoscopic image pairs. In the case of a pupil-forming eyepiece, an aperture stop is thus located within the optics; however, the final image of this stop after the eyepiece and beam splitter is real and located outside the optics, at the chosen eyepoint for the generation of the stereoscopic images (Rolland, Ha,

& Fidopiastis, 2004). The main drawback of an external exit pupil is that as the FOV increases, the optics increases in diameter, and thus the weight increases as the cube of the diameter. For the non-pupil-forming eyepiece, the pupil (i.e., the image of the eye iris through the cornea) of the eye itself constitutes the limiting aperture stop of the eyepiece. However, the same limitation occurs regarding the trade-off in FOV versus weight. An advantage of non-pupil-forming optics, however, is that the user may benefit from a larger eyebox where eye movements may more freely occur without vignetting of the image (i.e., vignetting refers to partial light loss or full loss of the image).

Projection optics is intrinsically a pupil-forming optical system. In the case of projection optics, an aperture stop is located within the projection lens by design, and the image of this aperture stop, known as the exit pupil, is virtual (i.e., it is located to the left of the last surface of the projection optics, shown schematically in Figure 2b). However, given the orientation of the beam splitter at 90° from that used with eyepiece optics, the final image of the exit pupil after the beam splitter is coincident with the eyepoint of the eye, as required. In the case of a virtual pupil, however, as the FOV increases, the optics size remains almost invariant. Furthermore, in the case of projection optics, where the final pupil is virtual, it is quite straightforward to design the optics with the pupil located at or close to the nodal points of the lens (i.e., mathematical first-order constructs with unit angular magnification). In such a case there will be little or no distortion. Correcting for distortion optically eliminates the need for real-time distortion correction with software or hardware. This property holds for HMPD designs with relatively large FOVs. While optical correction is currently readily available for various hardware solutions, eliminating the need for distortion correction not only minimizes the cost of the system but also eliminates processing; therefore, there are no additional system delays. Furthermore, distortion-free optics avoids the deterioration in image quality that becomes more pronounced with increasing amounts of required correction. The fact that the microdisplay pixels remain square regardless of the level of predistortion compensation of the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

Figure 3. Demonstration of occlusion capability of virtual objects by real objects. The user reaches out to the trachea in the ARC display and occludes part of the object with his hand. A grayscale image is shown here; however, the display is full color. (Mandible and Trachea, Visible Human Dataset 3D Models, courtesy of Celina Imielinska, Columbia University.)

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back onto the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can thus be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies in the location of the material with respect to the optical image projected by the

optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy in the location of the projected images and the material, as now quantified.
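To see in first-order terms why proximity to the material matters, one can treat each retro-reflective element as a limiting aperture of width d: diffraction then spreads the returned beam by roughly λ/d radians, so an image focused a distance Δz from the material is blurred by about Δz·λ/d. The sketch below is our illustrative estimate only, not the diffraction model of Martins and Rolland (2003), and the element sizes used are assumed values:

```python
def retro_blur_mm(wavelength_nm=550.0, element_um=50.0, defocus_mm=10.0):
    """First-order diffraction estimate for a retro-reflective screen:
    an element of width d spreads the returned beam by ~lambda/d radians,
    so an image focused defocus_mm away from the material blurs by
    roughly defocus_mm * lambda/d (small-angle approximation)."""
    spread_rad = (wavelength_nm * 1e-9) / (element_um * 1e-6)  # ~lambda/d
    return defocus_mm * spread_rad  # blur width in mm

# Smaller elements diffract more, so the same focus mismatch costs more resolution:
for d_um in (25.0, 50.0, 100.0):
    print(f"element {d_um:5.1f} um -> blur ~ {retro_blur_mm(element_um=d_um):.3f} mm")
```

Under these assumed numbers, a 10-mm mismatch between the projected image and the material already produces a blur on the order of a tenth of a millimeter, consistent with the requirement that the material sit close to the focused image.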

An analysis of diffraction was conducted on two types of material: a micro-beaded type of material (i.e., Scotchlite 3M Fabric, Silver-beaded) and a micro corner-cube type of material (i.e., Scotchlite 3M Film, Silver-cubed), both shown on a microscopic scale in Figure 4b.¹ The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. From this analysis it is shown that the micro corner-cube material will be superior in maximizing the light throughput and minimizing loss in resolution due to diffraction. In a

¹ Scotchlite is a registered trademark of the 3M Company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.


companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in. diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-μm pixel size. While higher resolution would have been preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight, four-element, compact projection optics using a combination of DOEs (diffractive optical elements), plastic components, and aspheric surfaces. While plastic components are ideal for designing an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual fields in both cases.
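The 52° FOV and 4.1 arc-minute figures above follow from first-order optics, so they can be cross-checked directly from the microdisplay diagonal, the pixel pitch, and the 35-mm focal length. A minimal sketch in Python (all values taken from the text):

```python
import math

def fov_deg(display_diag_mm, focal_mm):
    """Full diagonal field of view projected by a lens of the given focal length."""
    return 2 * math.degrees(math.atan(display_diag_mm / 2 / focal_mm))

def pixel_arcmin(pixel_um, focal_mm):
    """Angular subtense of a single pixel, in arc minutes."""
    return 60 * math.degrees(math.atan(pixel_um * 1e-3 / focal_mm))

diag_mm = 1.35 * 25.4                      # 1.35-in. diagonal AM-LCD
print(round(fov_deg(diag_mm, 35.0)))       # -> 52 (deg, diagonal)
print(round(pixel_arcmin(42.0, 35.0), 1))  # -> 4.1 (arc minutes per pixel)
```

The same relations show the trade-off behind the 70° design: at a fixed pixel pitch, a shorter focal length buys FOV at the cost of angular resolution.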

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and its compactness. If the microdisplay acts as a mirror, as do liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter must be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1-in. diagonal), their brightness, and their resolution.
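The vignetting constraint can be made concrete with a rough first-order estimate: with telecentric chief rays, the beam footprint at the projection lens equals the panel size plus the growth of the marginal-ray cone over the panel-to-lens gap. The sketch below uses purely illustrative numbers (a 1-in. panel, a 10-mm gap, an f/2.4 cone); none of them come from the text:

```python
import math

def min_lens_diameter_mm(display_mm, gap_mm, cone_half_angle_deg):
    """Smallest lens aperture that passes a telecentric beam unvignetted.

    Chief rays are parallel to the axis, so the footprint at the lens is the
    display size plus the marginal-ray cone spread over the display-to-lens gap.
    """
    spread = 2 * gap_mm * math.tan(math.radians(cone_half_angle_deg))
    return display_mm + spread

# Illustrative only: 1-in. (25.4-mm) LCOS panel, 10-mm gap,
# f/2.4 cone (half angle ~11.8 deg) -> lens must exceed ~29.6 mm.
print(round(min_lens_diameter_mm(25.4, 10.0, 11.8), 1))
```

The estimate shows why an LCOS-based design cannot shrink the lens down to the panel size, unlike a self-emitting display.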

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information about the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD, composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field, and with only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figures 6a and 6b show the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXCLS11s. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors. The optical face capture system is under optimization for robustness, for minimization of shadows on the face, and for placement of the mirror side of the beam splitter so as to capture the face of any participant with no interference from the projector light or the beam splitter.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras mounted, in this case, to a table rather than the head for a feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)

Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment, placed in the analogous spatial relation to others and to virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
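The 65-mm radius quoted for the face-capture mirror can be sanity-checked with the paraxial mirror equation, 1/v + 1/u = 1/f with f = −R/2 for a convex mirror: the mirror forms a reduced, upright virtual image of the face that the short-focal-length lipstick camera then photographs. In the sketch below, only the 65-mm radius comes from the text; the 150-mm face-to-mirror distance is an illustrative assumption:

```python
def convex_mirror_image(radius_mm, object_dist_mm):
    """Paraxial imaging by a convex mirror: 1/v + 1/u = 1/f, with f = -R/2.

    Returns (image distance v, lateral magnification m). Negative v means a
    virtual image behind the mirror, which is what the lipstick camera films.
    """
    f = -radius_mm / 2.0
    v = 1.0 / (1.0 / f - 1.0 / object_dist_mm)
    m = -v / object_dist_mm
    return v, m

# Face assumed ~150 mm from the 65-mm-radius mirror (assumed distance):
# the virtual image sits ~26.7 mm behind the mirror at ~0.18x magnification.
v, m = convex_mirror_image(65.0, 150.0)
print(round(v, 1), round(m, 2))
```

The strong demagnification is the point of the convex surface: it compresses the whole face into a small virtual image within the camera's field, at the cost of some distortion that the unwrapping algorithms must undo.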

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light in the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face with each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and instead integrating the retro-reflective material within the HMPD itself, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 μm instead of 100 μm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in. diagonal OLED of higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 μm. Such microstructures are not commercially available, and custom-designed materials are being investigated.
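The diffraction argument can be sketched to first order: a retro-reflector aperture d spreads the returned beam by roughly λ/d, which for 100-μm structures viewed directly gives an angular blur of tens of arc minutes, the same order as the "about 10 minutes of arc" best-case estimate above. This is a rough order-of-magnitude sketch, not the authors' detailed analysis:

```python
import math

def diffraction_spread_arcmin(wavelength_nm, aperture_um):
    """First-order angular spread ~ lambda/d of light diffracted by
    retro-reflective microstructures of aperture d (order of magnitude only)."""
    theta_rad = (wavelength_nm * 1e-9) / (aperture_um * 1e-6)
    return 60 * math.degrees(theta_rad)

# 550-nm (green) light on 100-um corner cubes: ~19 arcmin of angular blur,
# far coarser than the ~4 arcmin per-pixel resolution of the display optics.
print(round(diffraction_spread_arcmin(550, 100), 1))
```

Placing the material optically at the virtual image plane sidesteps this blur, which is why the M-HMPD adds the imaging optics rather than simply shrinking the microstructures.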

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and a more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a fully virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001 we have built two deployable (10-minute setup) displays, 10-ft wide by 7-ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. All displays aim to provide multiuser capability, including for remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various separate interface devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, including 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact with the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places, in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike, and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of the year 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached to an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we examined whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly with location, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, together with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology and other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct-perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, together with retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable artificial reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank the 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care: Part II, adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6774869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5621572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5572229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6804066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6731434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.



sation of the rendered image may cause visual discomfort if the pixels are resolvable, as commonly encountered in HMDs. Thus distortion-free optics, even today with high-speed processing, presents key advantages for more natural perception in HMDs generally.

3.2 The Retro-Reflective Screen: Any Shape in Any Location

A key property of retro-reflective material is that rays hitting the surface at any angle are reflected back along the same incident optical path. Thus, theoretically, the perception of the image is independent of the shape and location of the screen made of such material. Such technology can therefore be implemented with curved or tilted screens with no apparent distortion. In practice, depending on the specifics of the retro-reflective material, some dependence on the shape may be observed outside a range of bending of the material. Also, image quality (i.e., the sharpness of the image) is in practice limited by the optical diffraction of the material (Martins & Rolland, 2003), whose impact can be highly significant for large discrepancies in the location of the material with respect to the optical image projected by the optics. Thus, ideally, to minimize the effect of diffraction, the material must be placed in close proximity to the images focused by the projector. Furthermore, depending on the specific properties of the material, the amount of blurring, quantified as the width of the point spread function (PSF), may be more or less pronounced for the same discrepancy in the location of the projected images and the material, as now quantified.
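The retro-reflection property itself follows from a little vector algebra: a corner cube is three mutually orthogonal mirrors, and a reflection off a mirror with unit normal n maps a ray direction d to d - 2(d.n)n. Applying this once per face negates one component of d each time, so the exiting ray is antiparallel to the incident one regardless of the incidence angle. A minimal sketch of this geometry (illustrative only, not the paper's analysis):

```python
def reflect(d, n):
    """Reflect direction d off a plane mirror with unit normal n:
    d' = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def corner_cube(d):
    """Three sequential reflections off the x, y, and z faces of a corner
    cube negate each component of the incoming direction in turn."""
    for n in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        d = reflect(d, n)
    return d

# Any incident direction comes back exactly reversed.
print(corner_cube((0.3, -0.5, 0.81)))   # -> (-0.3, 0.5, -0.81)
```

This is why the returned ray retraces the incident path for any screen orientation, which is the basis of the shape- and location-independence described above.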

An analysis of diffraction was conducted on two types of material: a micro-beaded material (i.e., Scotchlite 3M Fabric Silver-beaded) and a micro corner-cube material (i.e., Scotchlite 3M Film Silver-cubed), both shown on a microscopic scale in Figure 4b.¹ The analysis, shown in Figure 4a, quantifies the spread of light after retro-reflection due to diffraction. Measurements made in the laboratory, also reported in Figure 4b, indicate a good correlation of the results with the mathematical predictions. From this analysis, it is shown that the micro corner-cube material is superior in maximizing the light throughput and minimizing loss in resolution due to diffraction. In a

¹ Scotchlite is a registered trademark of the 3M Company.

Figure 4. (a) Theoretical modeling of the point spread function (PSF) for two kinds of retro-reflective materials. (b) 3D (x, y, I) measured point spread functions; a microscopic view of the two different types of materials is shown above the PSFs, as well as their basic 3D microstructure.


companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).
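The scale of the diffraction effect can be roughed out with a single-aperture estimate: a microstructure of pitch d spreads the retro-reflected light over a half-angle of roughly lambda/d, so an image focused a distance dz away from the material is blurred laterally by about 2*dz*(lambda/d). A back-of-the-envelope sketch; the 100-micron pitch and 10-mm defocus are assumed illustrative values, not the measured parameters of Figure 4:

```python
def diffraction_blur_m(wavelength_m, pitch_m, defocus_m):
    """Single-aperture estimate of retro-reflection blur: each element of
    pitch d diffracts light into a half-angle of about lambda/d, so an image
    defocused by dz from the material is smeared by roughly 2*dz*lambda/d."""
    return 2.0 * defocus_m * (wavelength_m / pitch_m)

# Green light, 100-micron pitch, 10-mm mismatch between the projected image
# plane and the retro-reflective material (illustrative values only).
blur = diffraction_blur_m(550e-9, 100e-6, 10e-3)
print(f"{blur * 1e6:.0f} micron blur")   # -> 110 micron blur
```

The estimate captures both trends in the text: the blur vanishes as the material approaches the projected image plane, and it grows as the microstructures shrink, which is consistent with placing the screen near focus.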

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in. diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-µm pixel size. While higher resolution would be preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on an ultralightweight, four-element, compact projection optics using a combination of DOEs (diffractive optical elements), plastic components, and aspheric surfaces. While plastic components are ideal to design an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual field in both cases.
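These numbers can be cross-checked from the pixel pitch and focal length alone: one pixel subtends 2*atan(p/2f), and the display diagonal sets the full FOV. A quick sketch using only the values quoted above; the simple thin-lens estimate lands within a degree of the 52° design value:

```python
import math

# One pixel of pitch p seen through projection optics of focal length f
# subtends theta = 2 * atan(p / (2 f)).
def pixel_angle_arcmin(p, f):
    return math.degrees(2 * math.atan(p / (2 * f))) * 60

# Full diagonal field of view for an h x v pixel microdisplay of pitch p.
def diagonal_fov_deg(h, v, p, f):
    half_diag = math.hypot(h, v) * p / 2
    return math.degrees(2 * math.atan(half_diag / f))

# Values quoted in the text: 42-micron pixels, 640x480 addressable pixels,
# 35-mm focal length projection optics.
print(f"{pixel_angle_arcmin(42e-6, 35e-3):.1f} arcmin per pixel")        # -> 4.1
print(f"{diagonal_fov_deg(640, 480, 42e-6, 35e-3):.0f} deg diagonal FOV") # -> 51
```

The per-pixel value reproduces the 4.1 arc minutes reported in the text; the small gap between the 51° estimate and the quoted 52° is expected, since the actual design is not an ideal thin lens.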

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, such as liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus additional optical and opto-mechanical components that add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions but fail to provide accurate cues of spatial attention, and they are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPDs can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the optics of HMPDs, it would be advantageous to extend the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD, composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field and only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figures 6a and 6b show the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXC-LS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.


mirrors. The optical face capture system is being optimized for robustness, minimization of shadows on the face, and placement of the mirror side of the beam splitter so as to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table, as opposed to the head, for a feasibility study. (Courtesy of frontal view from C. Reddy and G. Stockman, Michigan State University.)


(Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face with each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.
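The 65-mm convex-mirror radius quoted earlier in this section is consistent with the standard mirror equation, 1/u + 1/v = 1/f with f = R/2 (distances in front of the mirror taken positive, so a convex mirror has f < 0 and shows the camera an upright, minified virtual image of the face). A sketch of that calculation; the 150-mm face-to-mirror distance is an assumed illustrative value, not a figure from the paper:

```python
def convex_mirror_image(object_distance_m, radius_of_curvature_m):
    """Mirror equation 1/u + 1/v = 1/f with f = R/2; distances in front of
    the mirror are positive, so a convex mirror (R < 0) returns a negative
    image distance v: an upright, minified virtual image behind the mirror."""
    f = radius_of_curvature_m / 2.0
    v = 1.0 / (1.0 / f - 1.0 / object_distance_m)
    m = -v / object_distance_m   # lateral magnification (positive: upright)
    return v, m

# Illustrative case: face ~150 mm from a convex mirror with R = -65 mm.
v, m = convex_mirror_image(0.150, -0.065)
print(f"virtual image {abs(v) * 1000:.1f} mm behind the mirror, {m:.2f}x")
# -> virtual image 26.7 mm behind the mirror, 0.18x
```

The minified virtual image keeps the whole side of the face within the small lipstick camera's field of view, which is the role the convex surface plays in the face capture system.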

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of HMPDs to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and solely integrating the retro-reflective material within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).


Figure 7, with a Fresnel lens for low weight and compactness.

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference from the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in. diagonal OLED of higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of the approximately 10-µm scale microstructure retro-reflective material. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since 2001, we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center, networked to some of the other ARCs as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).


users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization, including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).
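Keeping the replicas of a shared scene consistent across networked ARCs is, at its core, a synchronization problem: each node must apply remote object updates so that all replicas converge. The sketch below illustrates one common consistency rule (per-object last-writer-wins by timestamp); it is purely illustrative and is not the DARE/ARC protocol, whose adaptive scheme is described in the cited papers:

```python
from dataclasses import dataclass, field

@dataclass
class SharedScene:
    """A node's replica of the shared scene: object id -> (timestamp, pose).
    A remote update is applied only if it is newer than what the replica
    already holds (last-writer-wins), so replicas converge to the same
    state regardless of message arrival order."""
    objects: dict = field(default_factory=dict)

    def apply(self, obj_id, timestamp, pose):
        current = self.objects.get(obj_id)
        if current is None or timestamp > current[0]:
            self.objects[obj_id] = (timestamp, pose)

# Two replicas receiving the same updates in different orders converge.
a, b = SharedScene(), SharedScene()
updates = [("lung", 1, (0, 0, 0)), ("lung", 2, (0, 0, 5))]
for u in updates:
    a.apply(*u)
for u in reversed(updates):
    b.apply(*u)
print(a.objects == b.objects)   # -> True
```

A real distributed mixed-reality system must additionally deal with clock skew, latency compensation, and interpolation of poses between updates, which is where the adaptive synchronization work cited above comes in.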

Variants of the HMPD optics targeted at specific applications, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004), have been developed in our laboratory. Also, the HMPD in its original prototype form with 52° ultralightweight optics lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places, in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung, courtesy of Celina Imielinska, Columbia University).

542 PRESENCE: VOLUME 14, NUMBER 5

location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on an HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²
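The tracking step described above amounts to composing rigid-body poses: the overlay needs the HPS pose expressed in the tracked user's head frame. The following is a minimal sketch, not the authors' code, assuming the tracker reports each rigid body's pose as a rotation and translation (R, t) mapping body coordinates into tracker coordinates; the function and frame names are ours.

```python
# Hypothetical sketch: head-relative HPS pose from tracker-space poses.
#   T_head<-hps = inverse(T_tracker<-head) o T_tracker<-hps

def mat_vec(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert_pose(R, t):
    # A rigid transform inverts as (R, t)^-1 = (R^T, -R^T t).
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return Rt, [-x for x in mat_vec(Rt, t)]

def compose(Ra, ta, Rb, tb):
    # Apply (Rb, tb) first, then (Ra, ta): x -> Ra (Rb x + tb) + ta.
    return mat_mul(Ra, Rb), [mat_vec(Ra, tb)[i] + ta[i] for i in range(3)]

def hps_in_head_frame(R_head, t_head, R_hps, t_hps):
    Ri, ti = invert_pose(R_head, t_head)
    return compose(Ri, ti, R_hps, t_hps)
```

For instance, with the head translated 1 m along x and the HPS at (1, 2, 0) in tracker coordinates, the HPS lands at (0, 2, 0) in the head frame; rendering the anatomy with this head-relative pose is what keeps it registered to the mannequin as the trainee moves.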

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically-based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable, breathing 3D model of the lungs).
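One piece of the model-to-HPS fitting mentioned above can be illustrated concretely: estimating the uniform scale between matched landmarks on a generic anatomical model and the same landmarks measured on the physical mannequin. This is a hypothetical sketch under a least-squares assumption (a full registration would also recover rotation and translation, e.g., with a closed-form method such as Horn's); the function names are ours.

```python
import math

def centroid(pts):
    n = len(pts)
    return [sum(p[i] for p in pts) / n for i in range(3)]

def rms_spread(pts):
    # Root-mean-square distance of the landmarks from their centroid.
    c = centroid(pts)
    return math.sqrt(sum(sum((p[i] - c[i]) ** 2 for i in range(3))
                         for p in pts) / len(pts))

def landmark_scale(model_pts, hps_pts):
    # Least-squares uniform scale mapping model landmarks onto HPS landmarks:
    # the ratio of RMS spreads about the respective centroids.
    return rms_spread(hps_pts) / rms_spread(model_pts)
```

Applying the resulting scale to the whole textured model before rigid registration brings it to the mannequin's actual size, independent of where either point set sits in its coordinate frame.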

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information, including computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in

². Polaris is a registered trademark of Northern Digital Inc.


performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached to an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
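The body-fixed layout described above has a simple geometric core: a tool's position is stored once as an offset in the user's egocentric frame, and as the tracked body translates and turns, its world position follows automatically, which is the behavior spatial updating lets users exploit. A minimal 2D sketch (our own illustration, with hypothetical names, reduced to position and heading):

```python
import math

def tool_world_position(offset_xy, body_xy, body_yaw_rad):
    # Rotate the egocentric offset by the body's heading, then translate
    # by the body's position: world = R(yaw) * offset + body.
    ox, oy = offset_xy
    c, s = math.cos(body_yaw_rad), math.sin(body_yaw_rad)
    return (body_xy[0] + c * ox - s * oy,
            body_xy[1] + s * ox + c * oy)
```

A tool stored 1 m off the body's central axis stays at that egocentric bearing however the user walks or turns; only the stored offset, not per-frame world coordinates, defines the menu layout.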

If users can make use of tools and menus freely moving around the body, then are there sweet-spot locations around the body that may have different cognitive properties? Some areas are clearly more attention getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, virtual objects, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as their location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like the CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties for any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016, ITR EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoints location for a near- or far-field tasks visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.



companion paper, an analysis of human performance in resolving small details with both types of materials is reported (Fidopiastis, Furhman, Meyer, & Rolland, 2005).

Finally, it is important to note that the FOV of the HMPD is invariant with the location of the material in the environment, and the apparent size of a 3D object obtained from generating stereoscopic images will also be invariant with the location of the user with respect to the material, as long as the head of the user is tracked in position and orientation, as in any AR application.

3.3 Optical Design of HMPDs

The optical design of any HMD, including the HMPD, is highly driven by the choice of the microdisplays, specifically their size, resolution, and whether they are self-emitting or require illumination optics. The smaller the microdisplays, the higher the required power of the optics to achieve a given FOV, and thus the higher the number of elements required.

The microdisplays and associated electronics first available for this project were 1.35-in.-diagonal backlit color AM-LCDs (active matrix liquid crystal displays) with (640 × 3) × 480 pixels and 42-μm pixel size. While higher resolution would be preferred, the availability in size and color of this microdisplay was determinant for the choice made. A 52° FOV optics per eye with a 35-mm focal length was designed in order to maintain a visual resolution of less than about 4 arc minutes. This choice resulted in a predicted 4.1 arc minutes per pixel in angular resolution, horizontally and vertically (Hua, Ha, & Rolland, 2003). In spite of the lower resolution imposed by the microdisplay, larger-FOV optics were explored. For example, a 70° FOV optics was investigated (Ha & Rolland, 2004), together with a discussion of the properties of such optics (Rolland, Biocca, et al., 2004). Both designs were based on ultralightweight, four-element, compact projection optics using a combination of DOEs, plastic components, and aspheric surfaces. While plastic components are ideal to design an ultralight system, their combination with glass components and DOEs also enables higher image quality. The total weight of each lens assembly was only 6 g. The mechanical dimensions of the 52° and 70° FOV optics were 20 mm in length by 18 mm in diameter, and 15 mm in length by 13.4 mm in diameter, respectively. For both designs, an analysis of performance determined that the polychromatic modulation transfer functions displayed more than 40% contrast at 25 line-pairs/mm for a full-size pupil of 12 mm. The distortion was constrained to be less than 2.5% across the overall visual fields in both cases.
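The quoted first-order numbers can be checked with a thin-lens approximation (a sketch under that assumption, not the actual lens prescription): a 640 × 480 microdisplay with 42-μm pixels behind a 35-mm focal-length projection lens subtends roughly the stated field of view and per-pixel angular resolution.

```python
import math

# Quoted design parameters from the text.
PIXEL_MM = 0.042   # 42-um pixel pitch
NX, NY = 640, 480  # addressable color pixels
FOCAL_MM = 35.0    # projection lens focal length

def full_fov_deg(extent_mm, focal_mm):
    # Full field of view subtended by a display extent: 2 * atan(extent / 2f).
    return math.degrees(2 * math.atan(extent_mm / (2 * focal_mm)))

diag_mm = math.hypot(NX * PIXEL_MM, NY * PIXEL_MM)  # ~33.6 mm active diagonal
fov_diag_deg = full_fov_deg(diag_mm, FOCAL_MM)      # ~51.3 deg; quoted as 52 deg
arcmin_per_pixel = 60 * math.degrees(math.atan(PIXEL_MM / FOCAL_MM))  # ~4.1'
```

The same relation also shows why smaller microdisplays demand higher optical power: holding the FOV fixed while shrinking the display extent requires a proportionally shorter focal length.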

Finally, whether the microdisplay is self-emitting or requires illumination optics may impose additional constraints on the design and compactness. If the microdisplay acts as a mirror, such as liquid crystal on silicon (LCOS) displays (Huang et al., 2002), the projection optics diameter will be larger than the microdisplay to avoid vignetting the footprint of the telecentric light beam reflected off the LCOS as it passes through the lens. Telecentric means that the central ray of the cone of light from each point in the field of view is parallel to the optical axis before the LCOS display. Furthermore, such microdisplays require illumination optics, and thus they require additional optical and opto-mechanical components that will add weight, complexity, and cost to the overall system. Advantages of LCOS displays today, however, are their physical size (i.e., 1 in. diagonal), their brightness, and their resolution.

For each HMPD design (and this also applies to HMD design in general), the choice of the microdisplay is critical to the success of a final prototype or product, and as such the choice must be application driven if the prototype or product is targeted at a real-world application.

4 Design of HMPD Technologies for Specific Applications: The Teleportal HMPD (T-HMPD) and the Mobile HMPD (M-HMPD)

Nonverbal cues regarding what others are thinking and where they are looking are key sources of information during real-time collaboration among workmates, especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on visual expressions, but fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPD displays can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), is described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the HMPD optics, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) is described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical image processing and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face without occlusion, with minimal interference within the user's visual field and only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figures 6a–b show the left and right views, respectively, of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations between a small 4-mm focal length lipstick camera and the face. In the first implementation, the lipstick video cameras were Sony Electronics DXC-LS11. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

Optimization of the optical face capture system is ongoing for robustness, minimization of shadows on the face, and placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab) in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab) unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). A feasibility test of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)

The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test the algorithm in various presentation scenarios, including a retro-reflective ball and head model; other embodiments of the remote others will also be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
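The 65-mm radius can be related to the camera and face geometry with the standard mirror equation. The sketch below is illustrative only: the 150-mm face-to-mirror distance is an assumed value for the example, not one reported in the paper.

```python
# Sketch of the convex-mirror imaging used to size the face-capture mirror.
# R = 65 mm comes from the text; the 150-mm face-to-mirror distance is an
# assumed illustrative value.

def convex_mirror_image(object_mm: float, radius_mm: float):
    """Return (image distance, lateral magnification) for a convex mirror,
    using 1/s_i + 1/s_o = 1/f with f = -R/2 (virtual images are negative)."""
    f = -radius_mm / 2.0
    s_i = 1.0 / (1.0 / f - 1.0 / object_mm)
    return s_i, -s_i / object_mm

s_i, m = convex_mirror_image(150.0, 65.0)
# The convex mirror forms a minified, virtual image of the face behind the
# mirror, small enough to fall within the field of the 4-mm lipstick camera.
print(f"virtual image {abs(s_i):.1f} mm behind mirror, magnification {m:.2f}")
```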

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and integrating the retro-reflective material solely within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer-resolution microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens, chosen for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6-in diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the 10-µm scale. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001, we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization, including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodrigez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form with 52° ultralightweight optics lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places, in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)
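An overlay such as the lung model in Figure 11b requires composing the tracked poses so that the model, anchored to the HPS, is expressed in the user's viewing frame before rendering. The sketch below is a generic illustration of that pose-composition step, not the authors' code; all numeric poses are hypothetical values.

```python
# Generic sketch (not the authors' code) of the pose composition an AR overlay
# requires: express a virtual model anchored to the tracked HPS in the tracked
# user's (head) frame before rendering. All numeric poses are hypothetical.
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical tracker readings: both poses are reported in the tracker frame.
T_tracker_hps  = pose(np.eye(3), [0.0, 0.0, 1.0])   # HPS 1 m in front of tracker
T_tracker_head = pose(np.eye(3), [0.0, 0.3, 0.0])   # user's head 0.3 m up

# The model is anchored to the HPS; to render, map model points into the head
# frame: T_head_model = inv(T_tracker_head) @ T_tracker_hps @ T_hps_model
T_hps_model  = pose(np.eye(3), [0.0, 0.0, 0.05])    # lungs 5 cm inside the chest
T_head_model = np.linalg.inv(T_tracker_head) @ T_tracker_hps @ T_hps_model

point_model = np.array([0.0, 0.0, 0.0, 1.0])        # model origin
print(T_head_model @ point_model)                   # its position in the head frame
```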

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically-based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of year 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing bringing these models alive (e.g., a deformable, breathing 3D model of the lungs).
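The shared-state maintenance mentioned above is detailed in Hamza-Lup and Rolland (2004b); as a generic illustration of the idea only (not their algorithm), each client can apply remote object updates on a per-object, last-writer-wins basis keyed on timestamps:

```python
# Generic sketch of dynamic shared-state maintenance for a distributed 3D
# scene: a client applies a remote update only if it is newer than the state
# it already holds (per-object last-writer-wins). This is a simplified
# illustration, not the algorithm of Hamza-Lup & Rolland (2004b).
from dataclasses import dataclass, field

@dataclass
class SharedScene:
    # object id -> (timestamp, pose); pose kept as an (x, y, z) tuple for brevity
    state: dict = field(default_factory=dict)

    def local_update(self, obj_id, timestamp, pose):
        """Record a local change and return the message to broadcast."""
        self.state[obj_id] = (timestamp, pose)
        return (obj_id, timestamp, pose)

    def apply_remote(self, obj_id, timestamp, pose):
        """Apply a remote update only if it is newer than what we hold."""
        held = self.state.get(obj_id)
        if held is None or timestamp > held[0]:
            self.state[obj_id] = (timestamp, pose)
            return True
        return False  # stale update, discarded

a, b = SharedScene(), SharedScene()
msg = a.local_update("lung_model", 1.0, (0.0, 0.0, 1.0))
b.apply_remote(*msg)                          # b converges to a's state
b.apply_remote("lung_model", 0.5, (9, 9, 9))  # stale update rejected
print(b.state["lung_model"])                  # (1.0, (0.0, 0.0, 1.0))
```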

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. (See, for example, the useful but incomplete guidelines by Gabbard and Hix, 2001.)

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and in the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached like an egocentric framework (i.e., a field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there sweet-spot locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly with location, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments that are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these, efforts such as those of our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable artificial reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016, ITR EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.

546 PRESENCE VOLUME 14 NUMBER 5

Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE AeroSense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.


especially when movements must be coordinated, as in collaborative object manipulation. The direction of another's gaze is a key source of information as to the other's visual attention. For example, it helps disambiguate spatially ambiguous but common everyday phrases such as "get the tool over there." Facial expressions, supplemented by language, provide teammates with insight into the intentions, moods, and meanings communicated by others.

Video conferencing systems provide some information on facial expressions, but fail to provide accurate cues of spatial attention and are poor at supporting physical collaboration because collaborators lack a common action space. VR systems using HMPDs can better create a common action space and support naturalistic hand manipulation of 3D virtual objects during immersive collaboration, but facial expressions are partially occluded by the HMPD for local participants and further masked for remote participants. A key challenge in immersive collaborative systems is how to add the important information channels of facial expressions and visual attention into a distributed AR interface. An emerging version of the HMPD design, tailored for face-to-face interaction and referred to as the teleportal HMPD (T-HMPD), will be described in Section 4.1.

Another challenge in creating distributed collaborative environments is how to create mobile systems based on HMPD technology, given that collaboration may also take place as we navigate through a real environment such as a museum or a city. In such cases it is not possible to position retro-reflective material strategically in the environment; however, given the ultralight weight of the HMPD optics, it would be advantageous to expand the technology to mobile systems. A mobile HMPD (M-HMPD) will be described in Section 4.2.

4.1 The Teleportal HMPD (T-HMPD)

The T-HMPD integrates optical, image processing, and display technologies to capture, transport, and reconstruct an AR 3D model of the head and facial expression of a remote collaborator networked to a local partner. The T-HMPD is a hybrid optical and video see-through HMD composed of an HMPD combined with a pair of lipstick video cameras and two miniature mirrors mounted to the side and slightly forward of the face of the user, as shown schematically in Figure 5 (Biocca & Rolland, 2004). The configuration captures stereoscopic video of the user's face, including both sides of the face, without occlusion, with minimal interference within the user's visual field, and with only minimal occlusion of the face as viewed by other physical participants at the local site. Unlike room-based video, the head-worn camera and mirror system captures the full face of users no matter where they are looking and regardless of their locations, as long as the cameras are connected.

Figures 6a and 6b show, respectively, the left and right views of the lipstick cameras through the miniature mirrors (i.e., 1 in. in diameter in this case) in one of our first tests. The radius of curvature of the convex surface leading to the face capture shown was selected to be 65 mm by applying basic optical imaging equations relating a small 4-mm focal-length lipstick camera and the face. In the first implementation the lipstick video cameras were Sony Electronics DXC-LS11s. Adjustable rods were designed to mount the two mirrors in order to experiment with various configurations of mirrors, camera lenses, and distances from the face and the two mirrors.

Figure 5. General layout of the minimally intrusive stereoscopic face capture system.

The optical face capture system is being optimized for robustness, minimization of shadows on the face, and placement of the mirror side of the beam splitter to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab), in collaboration with the ODA Lab (Optical Diagnostics and Applications Lab), unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table.

Figure 6. Face images captured using the FCS: (a) left image and (b) right image. (c) Virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for feasibility study. (Courtesy frontal view from C. Reddy and G. Stockman, Michigan State University.)

The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
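The compositing step of such a pipeline can be caricatured in a few lines. The actual system unwraps the mirror distortion and maps the views onto a 3D face mesh (Reddy et al., 2004); the sketch below only illustrates a final feathered blend of two side views that are assumed to be already warped into a common frontal frame — `feather_blend` and all array shapes are illustrative assumptions, not the published implementation.

```python
import numpy as np

def feather_blend(left_view: np.ndarray, right_view: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two pre-warped half-face views into one frontal composite.

    Both inputs are assumed to be H x W grayscale images already unwarped
    into a common frontal frame; `overlap` columns in the middle are
    cross-faded linearly from the left view into the right view.
    """
    h, w = left_view.shape
    out = np.zeros((h, w), dtype=np.float64)
    mid = w // 2
    lo, hi = mid - overlap // 2, mid + overlap // 2
    out[:, :lo] = left_view[:, :lo]          # left half from the left camera
    out[:, hi:] = right_view[:, hi:]         # right half from the right camera
    alpha = np.linspace(0.0, 1.0, hi - lo)[None, :]   # linear cross-fade weights
    out[:, lo:hi] = (1 - alpha) * left_view[:, lo:hi] + alpha * right_view[:, lo:hi]
    return out
```

A wider overlap band gives a smoother seam at the cost of more ghosting where the two views disagree, which is one reason blending on a 3D mesh is attractive.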

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light into the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face with each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.
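Because such automatic gating is not yet implemented, the following is only a sketch of one plausible rule: dim the projector whenever the tracked gaze of the wearer points at a nearby collaborator rather than at retro-reflective material. The ~20° facing threshold (`cos_threshold`) and the function itself are assumptions for illustration.

```python
import numpy as np

def projector_should_dim(my_gaze: np.ndarray, my_pos: np.ndarray,
                         other_pos: np.ndarray, cos_threshold: float = 0.94) -> bool:
    """Hypothetical gating rule for the HMPD projector.

    `my_gaze` is a unit gaze direction from head tracking; returns True when
    the other participant lies within roughly 20 degrees (cos ~ 0.94) of the
    gaze direction, i.e., when the wearer is looking at a person's face.
    """
    to_other = other_pos - my_pos
    to_other = to_other / np.linalg.norm(to_other)   # unit vector toward the other user
    return float(np.dot(my_gaze, to_other)) >= cos_threshold
```

In a real system this decision would be re-evaluated every tracker frame, re-enabling the projector as soon as the gaze returns to the retro-reflective surface.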

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of HMPDs to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and integrating the retro-reflective material solely within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc even with a finer resolution of the microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in Figure 7 with a Fresnel lens for low weight and compactness.

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6 in. diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 µm. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinically guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft diameter center networked to some of the other ARCs, as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).
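Networking ARCs requires maintaining a dynamically shared scene state across sites (Hamza-Lup & Rolland, 2004b). The class below is a deliberately simplified stand-in for that machinery — per-object timestamped updates merged last-writer-wins — not the adaptive synchronization algorithm itself; the class and method names are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SharedScene:
    """Toy shared-state store for a distributed AR scene.

    Each node applies a remote update only if it is newer than the copy it
    already holds (last-writer-wins per object), a minimal caricature of
    distributed shared-state maintenance.
    """
    state: dict = field(default_factory=dict)   # object id -> (timestamp, pose)

    def local_update(self, obj_id: str, pose) -> tuple:
        stamped = (time.time(), pose)
        self.state[obj_id] = stamped
        return (obj_id, *stamped)               # message to broadcast to peer nodes

    def apply_remote(self, obj_id: str, timestamp: float, pose) -> bool:
        current = self.state.get(obj_id)
        if current is None or timestamp > current[0]:
            self.state[obj_id] = (timestamp, pose)
            return True                          # accepted: newer than local copy
        return False                             # stale update, ignored
```

A production system would add interpolation and latency compensation so that object motion stays smooth between updates, which is where the adaptive part of the published approach comes in.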

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.
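The absence of cross talk rests on corner-cube retro-reflection: three successive reflections off mutually orthogonal faces send light back antiparallel to its incoming direction, so each user's projected image returns essentially only along that user's own line of sight. A minimal numerical check of this property:

```python
import numpy as np

def corner_cube_reflect(direction: np.ndarray) -> np.ndarray:
    """Reflect a ray off the three mutually orthogonal faces of a corner cube.

    Each planar face flips one component of the direction vector; after all
    three bounces the ray exits antiparallel to how it arrived, which is why
    retro-reflective material sends projected light back toward its source.
    """
    d = np.asarray(direction, dtype=float)
    for axis in range(3):                       # bounce off the x, y, then z face
        normal = np.zeros(3)
        normal[axis] = 1.0
        d = d - 2.0 * np.dot(d, normal) * normal   # standard mirror reflection
    return d
```

Since the returned ray is the exact negation of the incident ray for any incidence direction, only the projector (and the eye placed optically conjugate to it via the beam splitter) receives the reflected image.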


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike, and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR. (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University.)
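The overlay itself reduces to composing rigid transforms: the tracker reports the pose of the HPS in the tracker (world) frame, and a one-time registration fixes the anatomy in the HPS frame. The sketch below uses hypothetical pose values and naming (`world_T_hps`, `hps_T_anatomy`); it illustrates the transform chain, not the system's actual code.

```python
import numpy as np

def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: the tracker reports the mannequin (HPS) in world
# coordinates, and registration fixed the anatomy in HPS coordinates.
world_T_hps = make_pose(np.eye(3), np.array([1.0, 0.0, 0.5]))
hps_T_anatomy = make_pose(np.eye(3), np.array([0.0, 0.2, 0.0]))

# Chaining the two places the virtual airway model where the physical chest is;
# the renderer would draw the model with this pose every tracker frame.
world_T_anatomy = world_T_hps @ hps_T_anatomy
```

Re-evaluating this product each frame is what keeps the virtual anatomy locked to the mannequin as trainee and simulator move.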

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of year 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
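Registering virtual landmarks to their physical counterparts on the HPS is, in its simplest paired-point form, a least-squares rigid fit. The paper does not specify which algorithm was used; the sketch below shows the standard Kabsch/SVD solution for corresponding landmark sets as one way such a registration can be computed.

```python
import numpy as np

def register_landmarks(model_pts: np.ndarray, hps_pts: np.ndarray):
    """Least-squares rigid registration (Kabsch/SVD) of paired landmarks.

    Rows of the two (N, 3) arrays are corresponding points; returns (R, t)
    such that R @ model_point + t best matches the physical landmark.
    """
    mc, hc = model_pts.mean(axis=0), hps_pts.mean(axis=0)
    H = (model_pts - mc).T @ (hps_pts - hc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = hc - R @ mc
    return R, t
```

With noiseless corresponding points the fit is exact; with digitized landmarks it minimizes the root-mean-square landmark error, and anisotropic scaling to the mannequin would be handled as a separate preceding step.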

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, but also to remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called Mobile Infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information, including computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our Mobile Infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The Mobile Infospaces model seeks to map some key cognitive properties of the space around the body of the user in order to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached to an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we examined whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there sweet spot locations around the body that may have different cognitive properties? Some areas are clearly more attention getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.

544 PRESENCE VOLUME 14 NUMBER 5

Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, programs such as our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display: the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPD spaces, such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems, users can interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display system, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head-mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.



mirrors. The optical face capture system is being optimized for robustness, minimization of shadows on the face, and placement of the mirror side of the beam splitter so as to capture the face of any participant with no interference from the projector light or the beam splitter. Image processing algorithms under development at the MIND Lab (Media Interface and Network Design Lab) in collaboration with the ODA Lab

Figure 6. Face images captured using the FCS: (a) left image and (b) right image; (c) virtual frontal view generated from two side-view cameras, mounted in this case to a table as opposed to the head, for feasibility study. (Courtesy, frontal view: C. Reddy and G. Stockman, Michigan State University.)


(Optical Diagnostics and Applications Lab) unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). A feasibility test of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test the algorithm in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.
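The final compositing step, producing one video texture from two side views, can be caricatured as a weighted cross-blend once the views have been unwrapped into a common frontal frame. The sketch below is only an illustration of that blending idea under the assumption of pre-rectified images; the actual unwrapping algorithms of Reddy et al. are considerably more involved:

```python
import numpy as np

def composite_frontal_view(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
    """Blend two rectified side-camera views into one frontal texture.

    A horizontal weight ramp favors the left camera on the left side of
    the face and the right camera on the right side (a hypothetical
    stand-in for the unwrapping/compositing described in the text).
    """
    assert left_view.shape == right_view.shape
    h, w = left_view.shape[:2]
    # Weight goes from 1.0 (pure left view) at column 0 to 0.0 at the last column.
    ramp = np.linspace(1.0, 0.0, w)
    if left_view.ndim == 3:          # broadcast over color channels
        ramp = ramp[None, :, None]
    else:                            # grayscale
        ramp = ramp[None, :]
    return ramp * left_view + (1.0 - ramp) * right_view

# Example with two dummy grayscale "views."
left = np.full((4, 5), 100.0)
right = np.full((4, 5), 200.0)
front = composite_frontal_view(left, right)
```

In a real pipeline, the blend weights would follow the face geometry rather than a fixed ramp, and the result would be streamed or mapped onto the 3D face mesh as described above.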

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light in the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not be able to allow placing of retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and solely integrating the retro-reflective material within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer resolution of the microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).


Figure 7 with a Fresnel lens for low weight and compactness.

Because the image formed by the projection optics is minimized at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6 in. diagonal OLED with a higher resolution of 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of retro-reflective material with microstructures on the order of 10 µm. Such microstructures are not commercially available, and custom-designed materials are being investigated.
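A back-of-envelope comparison makes clear why a ~10 arcmin acuity limit would be unacceptable: it is several times coarser than the angular size of a single display pixel. The sketch below assumes a 52° field of view (the figure given later in this section for the original prototype optics) spread over the 800-pixel width of the SVGA microdisplay; it is an order-of-magnitude estimate only, not the detailed optical analysis:

```python
def arcmin_per_pixel(fov_deg: float, pixels: int) -> float:
    """Angular subtense of one display pixel, in minutes of arc.

    Small-angle approximation: the full field of view is divided
    evenly across the pixels, which is adequate for a rough estimate.
    """
    return fov_deg * 60.0 / pixels

# Assumed values: 52-deg horizontal FOV, 800-pixel-wide microdisplay.
pixel_subtense = arcmin_per_pixel(52.0, 800)   # 3.9 arcmin per pixel
diffraction_limit = 10.0                       # arcmin, estimate from the text

# Without the relay optics, diffraction (~10 arcmin) would blur away
# more than two pixels' worth of the display's native resolution.
resolution_loss_factor = diffraction_limit / pixel_subtense
```

On these assumptions each pixel subtends about 3.9 arcmin, so a 10 arcmin diffraction blur would discard most of the microdisplay's resolution, which is why the intermediate imaging optics is required.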

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001, we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft-diameter center, networked to some of the other ARCs as shown in Figure 8. All displays aim to provide multiuser capability, including remotely located

Figure 8. Networked open environments with artificial reality centers (NOEs, ARCs).


users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Variants of the HMPD optics targeted at specific applications have been developed in our laboratory, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004). Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. Firstly, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Secondly, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine, because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a.

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung; courtesy of Celina Imielinska, Columbia University).

542 PRESENCE: VOLUME 14, NUMBER 5

The location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²
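The registration step described above reduces to a chain of rigid transforms: the tracker reports the head and HPS poses, and a one-time calibration aligns the anatomy model to the mannequin. A minimal sketch follows; the function names and frame conventions are ours for illustration, not the system's actual API:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def model_to_eye(T_tracker_head, T_tracker_hps, T_hps_model):
    """Chain tracker poses so the virtual anatomy follows the mannequin.

    T_tracker_head : pose of the user's head rig in tracker coordinates
    T_tracker_hps  : pose of the HPS rigid body in tracker coordinates
    T_hps_model    : one-time calibration aligning the anatomy model to the HPS
    Returns the transform placing model vertices in the viewer's frame.
    """
    return np.linalg.inv(T_tracker_head) @ T_tracker_hps @ T_hps_model
```

Re-evaluating this chain every frame with fresh tracker poses is what keeps the superimposed airway locked to the moving mannequin and viewpoint.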

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working toward interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on testing of bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
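Registering virtual models to real landmarks on the HPS can be illustrated with a standard least-squares rigid fit (the Kabsch/SVD solution). This is a generic sketch of the technique, not the authors' published method:

```python
import numpy as np

def register_landmarks(model_pts, hps_pts):
    """Least-squares rigid alignment of model landmarks to HPS landmarks.

    model_pts, hps_pts : (N, 3) arrays of corresponding points (N >= 3,
    non-collinear). Returns (R, t) so that R @ p + t maps model points
    onto the HPS.
    """
    mc, hc = model_pts.mean(axis=0), hps_pts.mean(axis=0)
    H = (model_pts - mc).T @ (hps_pts - hc)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = hc - R @ mc
    return R, t
```

With exact correspondences the fit is recovered perfectly; with noisy probe touches it returns the rotation and translation minimizing the summed squared landmark error.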

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object-manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

²Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data-object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
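The body-anchored layout these experiments probe amounts to storing tool positions in an egocentric frame and re-expressing them in world coordinates as the user moves. A minimal 2D ground-plane sketch (the names and the planar simplification are ours):

```python
import numpy as np

def body_to_world(body_xy, heading_rad, tool_offsets):
    """Place body-anchored virtual tools in world coordinates.

    body_xy      : (2,) user position on the ground plane
    heading_rad  : user's yaw; tools rotate with the body's central axis
    tool_offsets : (N, 2) tool positions fixed in the egocentric frame
                   (e.g., [0.4, 0.0] = 40 cm in front of the torso)
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s], [s, c]])          # 2D rotation for the yaw
    return tool_offsets @ R.T + body_xy      # rotate, then translate
```

Because the offsets never change, the renderer only needs the tracked body pose each frame to keep the whole tool field turning and translating with the user.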

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, virtual objects, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as their location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users, while also providing the potential for face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016, ITR EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6774869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5621572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5572229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6804066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6731434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.



(Optical Diagnostics and Applications Lab) unwrap the distorted images from the cameras and produce a composite video texture from the two images (Reddy, 2003; Reddy, Stockman, Rolland, & Biocca, 2004). The feasibility of creating a frontal virtual view from two lipstick cameras mounted to the side and slightly forward of the face is shown in Figure 6c, where the cameras in this case were mounted to a table. The composite stereo video texture can be sent directly via a high-bandwidth connection or mapped to a 3D mesh of the user's face. The 3D face of the user can be projected in the local space as a 3D AR object in the virtual environment and placed in the analogous spatial relation to others and virtual objects using the tracker data of the remote site. Alternatively, the teleportal 3D face-to-face model can be projected onto a head model covered in retro-reflective material, or as a 3D video on the wall of an office or conference room, as in traditional teleconferencing systems. As the algorithm for stereo face capture and reconstruction matures, we are preparing to test it in various presentation scenarios, including a retro-reflective ball and head model, and other embodiments of the remote others will be created. In optimizing the presentation of information to create the maximum sense of presence, we may investigate how best to display the remote faces. For example, we may employ a retro-reflective tabletop where 3D scientific visualization may be shared, or a retro-reflective wall that would open a common window to both distributed visualization and social environments.

For local participants wearing HMPDs, a participant would see the eyes of the other participant illuminated; however, the projector is too weak to shine light in the eyes of a side participant. The illuminated eyes may be turned off automatically when two local participants speak face-to-face to each other (while their faces are still being captured), and the projector can be automatically turned back on when facing any retro-reflective material. This feature has not yet been implemented. There is no such issue for remote participants, given how the face is captured.

4.2 Mobile Head-Mounted Projection Display (M-HMPD)

In order to expand the use of the HMPD to applications that may not allow placing retro-reflective material in the environment, a novel display based on the HMPD was recently conceived (Rolland, Martins, & Ha, 2003). A schematic of the display is shown in Figure 7. The main novelty of the display lies in removing the fabric from the environment and solely integrating the retro-reflective material within the HMPD, using additional optics between the beam splitter and the material to optically image the material at a remote distance from the user. A preferred location for the material is in coincidence with the monocular virtual images of the HMPD, to minimize the effects of diffraction imposed by the microstructures of the optical material. Without the additional optics, we estimated that in the best-case scenario visual acuity would have been limited to about 10 minutes of arc, even with a finer resolution of the microdisplay, which would be visually unacceptable. Thus, when the material within the HMPD is not used simply for increased illumination, the imaging optics between the integrated material and the beam splitter is absolutely required in order to provide adequate overall resolution of the viewed images. The additional optics is illustrated in

Figure 7. Conceptual design of a mobile HMPD (M-HMPD).


Figure 7 with a Fresnel lens for low weight and compactness.
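Imaging the embedded retro-reflective material at a remote distance follows the standard thin-lens relation; the distances in the sketch below are illustrative placeholders of ours, not the published design values:

```python
def focal_length(d_object_mm, d_image_mm):
    """Thin-lens equation, 1/f = 1/d_o + 1/d_i (real-is-positive convention).

    A virtual image formed on the same side as the object, as needed to
    push the embedded material out to apparent coincidence with the
    monocular virtual images, takes a negative d_i.
    """
    return 1.0 / (1.0 / d_object_mm + 1.0 / d_image_mm)

# Illustrative numbers: material ~30 mm from the lens, virtual image at 1 m.
f = focal_length(30.0, -1000.0)   # about 30.9 mm
```

The point of the exercise is that a modest focal length, realizable as a lightweight Fresnel lens, suffices to relocate the material's apparent position well away from the eye.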

Because the image formed by the projection optics is minified at the location of the retro-reflective material placed within the M-HMPD, a difference with the basic HMPD is that smaller microstructures are required (i.e., on the order of 10 µm instead of 100 µm). The detailed optical design of the lens is similar to that of the first ODA Lab HMPD optics, except that the microdisplay chosen is a 0.6 in. diagonal OLED with higher resolution, 800 × 600 pixels. Details of the optical design were reported elsewhere (Martins, Ha, & Rolland, 2003). A challenge associated with the development of the M-HMPD is the design and fabrication of the approximately 10 µm scale microstructure retro-reflective material. Such microstructures are not commercially available, and custom-designed materials are being investigated.

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinical guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft-diameter center, networked to some of the other ARCs as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located

Figure 8. Networked open environments with artificial reality centers (NOEs, ARCs).

540 PRESENCE: VOLUME 14, NUMBER 5

users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization, including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).

Variants of the HMPD optics targeted at specific applications, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004), have been developed in our laboratory. Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places, in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive, but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung; courtesy of Celina Imielinska, Columbia University).

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization and will further be working with us on the testing of bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
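Registration of a virtual model to physical landmarks of this kind is typically computed as a least-squares rigid transform between paired point sets, for example with the SVD-based Kabsch/Horn method. The sketch below is a minimal illustration of that general technique, not the ODA Lab implementation; the function name and point sets are hypothetical:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of paired landmarks, e.g. points on the
    virtual airway model and their probe-digitized counterparts on
    the HPS, expressed in tracker coordinates.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)        # centroids
    H = (src - cs).T @ (dst - cd)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if one appears.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given at least three non-collinear landmark pairs, the returned (R, t) maps model coordinates into tracker coordinates so the virtual anatomy can be overlaid on the physical simulator.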

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, but also to remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. (See, for example, the useful but incomplete guidelines by Gabbard and Hix, 2001.)

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
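The idea of tools affixed to the central axis of the body can be sketched as a simple egocentric-frame transform. The snippet below is an illustrative toy (the function name and frame conventions are our own assumptions, not the Mobile Infospaces code), placing a virtual tool at a fixed offset in a body frame that rotates with the user's heading:

```python
import math

def body_anchored_position(body_pos, body_yaw, offset):
    """World position of a virtual tool fixed in the user's egocentric frame.

    body_pos: (x, y, z) of the tracked body origin; body_yaw: heading in
    radians about the vertical axis; offset: tool position in the body
    frame as (right, up, forward). All names are hypothetical.
    """
    r, u, f = offset
    c, s = math.cos(body_yaw), math.sin(body_yaw)
    # Rotate the body-frame offset by the current heading, then translate.
    x = body_pos[0] + c * r + s * f
    y = body_pos[1] + u
    z = body_pos[2] - s * r + c * f
    return (x, y, z)
```

Re-evaluating the function as the tracked heading changes keeps the tool at the same place relative to the user's body, which is the spatial-updating behavior the experiments probed.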

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications on where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, virtual objects, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as their location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, programs such as our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with the retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable artificial reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016, ITR EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.


Page 13: Development of Head-Mounted - University of Central Florida

Figure 7 with a Fresnel lens for low weight and com-pactness

Because the image formed by the projection optics isminimized at the location of the retro-reflective materialplaced within the M-HMPD a difference with the basicHMPD is that smaller microstructures are required (ieon the order of 10 m instead of 100 m) The de-tailed optical design of the lens is similar to that of thefirst ODA Lab HMPD optics except that the microdis-play chosen is a 06 in diagonal OLED and higher res-olution 800 600 pixels Details of the optical designwere reported elsewhere (Martins Ha amp Rolland2003) A challenge associated with the development ofthe M-HMPD is the design and fabrication of about10 m scale microstructure retro-reflective materialSuch microstructures are not commercially available andcustom-design materials are being investigated

Because of its stand-alone capability, the M-HMPD extends the use of HMPDs to clinically guided surgery, medical simulation and training, wearable computers, mobile secure displays, and distributed collaborative displays, as well as outdoor augmented see-through virtual environments. The visual results of the M-HMPD compare in essence to those of eyepiece-based see-through AR systems currently available. The difference lies, however, in the higher image quality (i.e., lower blur), a distortion-free image, and more compact optics.

5 A Review of Collaborative Applications: Medical Applications and Infospaces

The HMPD facilitates the development of collaborative environments that allow seamless transitions through different levels of immersion, from AR to a full virtual reality experience (Milgram & Kishino, 1994; Davis et al., 2003). In the artificial reality center (ARC), users can navigate between various levels of immersion on the basis of where they position themselves with respect to the retro-reflective material. The ARC presented at ISMAR 2002 (Hamza-Lup et al., 2002), together with a remote collaboration application built on top of DARE (distributed augmented reality environment) (Hamza-Lup, Davis, Hughes, & Rolland, 2002), consists of a generally curved or shaped retro-reflective wall, an HMPD with miniature optics, a commercially available optical tracking system, and Linux-based PCs. The ARCs may take different shapes and sizes and, importantly, are quickly deployable either indoors or outdoors. Since the year 2001 we have built two deployable (10-minute setup) displays, 10 ft wide by 7 ft high, as well as a deployable (30-minute setup) 15-ft-diameter center networked to some of the other ARCs, as shown in Figure 8. However, all displays aim to provide multiuser capability, including remotely located

Figure 8. Networked open environments with artificial reality centers (NOEs ARCs).

540 PRESENCE VOLUME 14 NUMBER 5

users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptics, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).
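Networking the ARCs requires keeping the shared scene state consistent across sites. The published adaptive synchronization algorithm (Hamza-Lup & Rolland, 2004b) is more sophisticated, but the core idea can be sketched with a simple last-writer-wins merge keyed by timestamps; all names here are hypothetical:

```python
# Minimal sketch of shared-state maintenance between networked ARC sites.
# Hypothetical names; last-writer-wins by timestamp, NOT the published
# adaptive scene-synchronization algorithm.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """State of one shared virtual object as seen at one site."""
    pose: tuple          # simplified (x, y, z) position
    timestamp: float     # time of last authoritative update

@dataclass
class SceneReplica:
    nodes: dict = field(default_factory=dict)

    def local_update(self, name, pose, t):
        self.nodes[name] = SceneNode(pose, t)

    def merge_remote(self, name, node: SceneNode):
        """Accept a remote update only if it is newer than ours."""
        mine = self.nodes.get(name)
        if mine is None or node.timestamp > mine.timestamp:
            self.nodes[name] = node

# Two ARC sites sharing one anatomical model:
site_a, site_b = SceneReplica(), SceneReplica()
site_a.local_update("lung", (0.0, 1.2, 0.5), t=1.0)
site_b.merge_remote("lung", site_a.nodes["lung"])   # b adopts a's state
site_b.local_update("lung", (0.1, 1.2, 0.5), t=2.0)
site_a.merge_remote("lung", site_b.nodes["lung"])   # newer update wins
print(site_a.nodes["lung"].pose)  # -> (0.1, 1.2, 0.5)
```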

Variants of the HMPD optics targeted at specific applications, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004), have been developed in our laboratory. Also, the HMPD in its original prototype form with 52° ultralightweight optics lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.

Figure 10. Illustration of the endotracheal intubation training tool.

Rolland et al 541

The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI) based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a. The

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung, courtesy of Celina Imielinska, Columbia University).


location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on an HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).2
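Overlaying the airway model on the tracked HPS amounts to composing rigid transforms: the tracker reports the HPS pose and the user's head pose in a common tracker frame, and the model, calibrated once to landmarks on the HPS, is re-expressed in the viewer's frame each rendering cycle. A minimal sketch with homogeneous matrices follows; the numerical poses are hypothetical, not the lab's calibration data:

```python
# Sketch: registering a virtual anatomy model to a tracked mannequin (HPS).
# T_a_b denotes the pose of frame b expressed in frame a (4x4 homogeneous).
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical tracker readings: HPS and the user's head in the tracker frame.
T_tracker_hps  = make_pose(np.eye(3), [0.0, 1.0, 2.0])
T_tracker_head = make_pose(np.eye(3), [0.0, 1.6, 0.0])

# One-time calibration: model frame relative to HPS landmarks.
T_hps_model = make_pose(np.eye(3), [0.0, 0.1, 0.0])

# Per frame: model pose in the viewer's (head) frame.
T_head_model = np.linalg.inv(T_tracker_head) @ T_tracker_hps @ T_hps_model
print(T_head_model[:3, 3])  # model origin relative to the head: [ 0.  -0.5  2. ]
```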

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically-based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan before the end of year 2005 to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization and will further be working with us on the testing of bringing these models alive (e.g., a deformable breathing 3D model of the lungs).

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

2 Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. [See, for example, the useful but incomplete guidelines by Gabbard and Hix (2001).]

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
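In interface terms, the body-fixed object field described above means each tool stores a fixed offset in a torso-centered frame, and its world-space pose is recomputed from the tracked body pose so that the field translates and rotates with the user. A minimal 2D sketch, with a hypothetical layout:

```python
# Sketch of an egocentric (body-fixed) tool field: each tool keeps a fixed
# offset in the user's torso frame; world positions follow the tracked body.
import math

def tool_world_position(body_xy, body_heading_rad, offset_xy):
    """Rotate the body-frame offset by the heading, then translate."""
    c, s = math.cos(body_heading_rad), math.sin(body_heading_rad)
    ox, oy = offset_xy
    return (body_xy[0] + c * ox - s * oy,
            body_xy[1] + s * ox + c * oy)

# A tool floating 0.5 m to the user's side (body-frame offset (0.5, 0)).
print(tool_world_position((0.0, 0.0), 0.0, (0.5, 0.0)))  # -> (0.5, 0.0)

# After the user turns 90 degrees, the tool has swung around with them.
x, y = tool_world_position((0.0, 0.0), math.pi / 2, (0.5, 0.0))
print(round(x, 6), round(y, 6))  # -> 0.0 0.5
```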

If users can make use of tools and menus freely moving around the body, then are there sweet-spot locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially an agent (i.e., a virtual face), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.
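Because each HMPD renders its wearer's imagery privately, per-user customization reduces to a visibility filter at render time: each display draws only the labels addressed to everyone or to its wearer. A minimal sketch, with hypothetical label data:

```python
# Sketch: per-user label filtering in a shared HMPD scene. Each user's
# display renders labels addressed to "all" or to that specific user.

def labels_for_user(labels, user_id):
    """labels: list of (text, audience) where audience is 'all' or a user id."""
    return [text for text, audience in labels
            if audience in ("all", user_id)]

shared_labels = [
    ("Trachea", "all"),                       # common anatomy label
    ("Your tube depth: 21 cm", "trainee"),    # private trainee feedback
    ("Trainee error count: 2", "instructor"), # instructor-only overlay
]
print(labels_for_user(shared_labels, "trainee"))
# -> ['Trachea', 'Your tube depth: 21 cm']
```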

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable artificial reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6774869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.

Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5621572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5572229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6804066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6731434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoints location for a near- or far-field tasks visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.

Rolland et al 549

Page 14: Development of Head-Mounted - University of Central Florida

users (Hamza-Lup & Rolland, 2004b), as well as 3D multisensory visualization including 3D sound, haptic, and smell (Rolland, Stanney, et al., 2004). The user's motion becomes the computer interface device, in contrast to systems that may resort to various display devices (Billinghurst, Kato, & Poupyrev, 2001).
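The dynamic shared-state maintenance cited above (Hamza-Lup & Rolland, 2004b) keeps distributed copies of the scene consistent as users interact. As a rough illustration only — the class, message format, and object names below are invented for this sketch and are not taken from the cited system — a minimal last-writer-wins shared-state store can be written as:

```python
import time

class SharedScene:
    """Minimal dynamic-shared-state store: each node applies a remote
    update only if it is newer than its local copy (last-writer-wins)."""

    def __init__(self):
        self.state = {}  # object id -> (timestamp, pose)

    def local_update(self, obj_id, pose, t=None):
        t = time.time() if t is None else t
        self.state[obj_id] = (t, pose)
        return (obj_id, t, pose)  # message to broadcast to peers

    def remote_update(self, msg):
        obj_id, t, pose = msg
        current = self.state.get(obj_id)
        if current is None or t > current[0]:
            self.state[obj_id] = (t, pose)  # accept the newer update
            return True
        return False  # stale message: keep local state

# Two distributed nodes sharing one virtual object (illustrative names)
a, b = SharedScene(), SharedScene()
msg1 = a.local_update("lung_model", (0.0, 0.0, 1.5), t=1.0)
b.remote_update(msg1)
msg2 = b.local_update("lung_model", (0.1, 0.0, 1.5), t=2.0)
a.remote_update(msg2)
```

After the exchange, both nodes hold the newer pose; a re-delivered stale message is rejected. The cited adaptive algorithm goes further (adapting to network delay), which this sketch does not attempt.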

Variants of the HMPD optics targeted at specific applications, such as 3D medical visualization and distributed AR medical training tools (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005), embedded training display technology for the Army's future combat vehicles (Rodriguez, Foglia, & Rolland, 2003), and 3D manufacturing design collaboration (Rolland, Biocca, et al., 2004), have been developed in our laboratory. Also, the HMPD in its original prototype form, with 52° ultralightweight optics, lies at the core of the Aztec Explorer application developed at the 3DVIS Lab, which investigates various interaction schemes within SCAPE (stereoscopic collaboration in augmented and projective environments). SCAPE consists of an integration of the HMPD together with the HiBall3000 head tracker by 3rdTech (www.3rdtech.com), a SCAPE API developed in the 3DVIS Lab, the CAVERN G2 API networking library (www.openchannelsoftware.org), and a tracked 5DT Dataglove (Hua, Brown, & Gao, 2004).

In summary, the HMPD technology we have developed currently provides a fine balance of affordability and unique capabilities, such as (1) spanning the virtuality continuum, allowing both full immersion and mixed reality, which may open a set of new capabilities across various applications; (2) enabling teleportal capability with face-to-face interaction; (3) creating ultralightweight, wide-FOV, mobile, and secure displays; and (4) creating small, low-cost desktop technology or, on a larger scale, quickly deployable 3D visualization centers or displays.

5.1 Medical Visualization and Augmented Reality Medical Training Tools

In the ARCs, multiple users may interact on the visualization of 3D medical models, as shown in Figure 9, or practice procedures on AR human patient simulators (HPS) with a teaching module referred to as the ultimate intubation head (UIH), as shown in Figure 10, which we shall now further detail. Because, as detailed earlier in the paper, the light projected from one user returns only to that user, there is no cross talk between users, and thus multiple users can coexist with no overlapping graphics in the ARCs.

Figure 9. Concept of multiple users interacting in the ARC.
Figure 10. Illustration of the endotracheal intubation training tool.


The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI), based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. Firstly, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Secondly, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike, and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses, as well as realistic chest movement. The simulator is interactive, but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung; courtesy of Celina Imielinska, Columbia University).

Intubation on the HPS is shown in Figure 11a. The location of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation, the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc.² and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).
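The overlay computation implied by this setup can be sketched as a chain of rigid transforms: the tracker reports the poses of the viewer's head and of the HPS in its own frame, and anatomy modeled in the HPS frame must be re-expressed in the viewer's frame before rendering. The sketch below is a minimal illustration under invented poses, not the actual UIH code:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative poses reported by the tracker in its own (world) frame:
T_world_hps = pose(np.eye(3), [0.0, 0.0, 2.0])   # HPS mannequin pose
T_world_head = pose(np.eye(3), [0.0, 0.2, 0.0])  # viewer's head pose

# Anatomy vertices are modeled in the HPS frame; to render them from the
# viewer's eyepoint we chain: head <- world <- HPS.
T_head_hps = np.linalg.inv(T_world_head) @ T_world_hps

airway_point_hps = np.array([0.0, 0.1, 0.0, 1.0])  # homogeneous coords
airway_point_head = T_head_hps @ airway_point_hps
```

With the identity rotations assumed here, the point simply shifts by the relative translation; in the real system both rotations and per-frame tracker updates enter the chain.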

In an effort to develop the UIH system, we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset), develop methods for scaling these models to the HPS, as well as methods for registration of virtual models with real landmarks on the HPS. Furthermore, we are working towards interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing such methods, we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of the year 2005, to be able to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization, and which will further be working with us on the testing of bringing these models alive (e.g., a deformable, breathing 3D model of the lungs).
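The landmark registration step described above is, at its core, a point-based rigid alignment between model coordinates and measured landmarks on the physical HPS. One standard solution is the SVD-based Kabsch/orthogonal-Procrustes method, sketched here under the assumption of paired, corresponding landmarks (the cited work may use a different algorithm):

```python
import numpy as np

def register_landmarks(model_pts, hps_pts):
    """Least-squares rigid transform (R, t) mapping model landmark
    coordinates onto their measured counterparts on the physical HPS
    (Kabsch / orthogonal Procrustes via SVD)."""
    mc, hc = model_pts.mean(axis=0), hps_pts.mean(axis=0)
    H = (model_pts - mc).T @ (hps_pts - hc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation (det = +1)
    t = hc - R @ mc
    return R, t

# Synthetic check: four non-coplanar landmarks, known 90-degree rotation
# about z plus a translation (all numbers illustrative).
model = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
hps = model @ R_true.T + np.array([0.5, 0.0, 1.0])
R, t = register_landmarks(model, hps)
```

For noise-free correspondences the recovered transform is exact; with measurement noise it is the least-squares optimum. Anisotropic scaling of the model to the HPS would require an extra scale factor, which this rigid sketch omits.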

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects, to be fully immersed in virtual environments, but also to remain able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface. AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels of "see-through" diagrams) and capabilities.

A research project called mobile infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object manipulation tasks when compared to other media? To answer this question, we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information, including computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

²Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data object design aimed at mobile and team-based AR systems are not well established. (See, for example, the useful but incomplete guidelines by Gabbard and Hix, 2001.)

Our mobile infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The mobile infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications on where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.
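The per-user labeling capability described above amounts to filtering a shared scene description by viewer identity before rendering. A minimal sketch of that idea follows; the data layout, user names, and labels are illustrative assumptions, not from the actual system:

```python
# A shared scene: each annotation records which users may see it.
# Every HMPD renders only the labels addressed to its own wearer,
# so private information coexists with the shared 3D content.
scene = [
    {"obj": "engine_model", "label": "torque spec",
     "visible_to": {"alice"}},
    {"obj": "engine_model", "label": "assembly step 3",
     "visible_to": {"alice", "bob"}},
]

def labels_for(user, scene):
    """Return the annotations this user's display should draw."""
    return [e["label"] for e in scene if user in e["visible_to"]]
```

Here "bob" would see only the shared assembly note, while "alice" also sees her private annotation, matching the customization-within-a-shared-environment property described in the text.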

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique, perspective-accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, together with retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677, the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6774869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5621572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5572229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6804066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6731434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.

Rolland et al 549

Page 15: Development of Head-Mounted - University of Central Florida

The UIH is a training tool in development in the ODA Lab for endotracheal intubation (ETI) based on HMPD and ARC technology (Rolland, Hamza-Lup, et al., 2003; Hamza-Lup, Rolland, & Hughes, 2005). It is aimed at medical students, residents, physician assistants, pre-hospital care personnel, nurse-anesthetists, experienced physicians, and any medical personnel who need to perform this common but critical procedure in a safe and rapid sequence.

The system trains a wide range of clinicians in safely securing the airway during cardiopulmonary resuscitation, as ensuring immediate ventilation and/or oxygenation is critical for a number of reasons. First, ETI, which consists of inserting an endotracheal tube through the mouth into the trachea and then sealing the trachea so that all air passes through the tube, is a lifesaving procedure. Second, the need for ETI can occur in many places in and out of the hospital. Perhaps the most important reason for training clinicians in ETI, however, is the inherent difficulty associated with the procedure (American Heart Association, 1992; Walls, Barton, & McAfee, 1999).

Current teaching methods lack flexibility in more than one sense. The most widely used model is a plastic or latex mannequin commonly used to teach advanced cardiac life support (ACLS) techniques, including airway management. The neck and oropharynx are usually difficult to manipulate without inadvertently "breaking" the model's teeth or "dislocating" the cervical spine because of the awkward hand motions required. A relatively recent development is the HPS, a mannequin-based simulator. The HPS is similar to the existing ACLS models, but the neck and airway are often more flexible and lifelike and can be made to deform and relax to simulate real scenarios. The HPS can simulate heart and lung sounds and provide palpable pulses as well as realistic chest movement. The simulator is interactive but requires real-time programming and feedback from an instructor (Murray & Schneider, 1997). Utilizing an HPS combined with 3D AR visualization of the airway anatomy and the endotracheal tube, paramedics will be able to obtain a visual and tactile sense of proper ETI. The UIH will allow paramedics to practice their skills and provide them with the visual feedback they could not obtain otherwise.

Intubation on the HPS is shown in Figure 11a.

Figure 11. (a) A user training on an intubation procedure. (b) Lung model superimposed on the HPS using AR (3D model of the Visible Human Dataset lung courtesy of Celina Imielinska, Columbia University).


The locations of the HPS, the trainee, and the endotracheal tube are tracked during the visualization. The AR system integrates an HMPD and an optical tracker with a Linux-based PC to visualize internal airway anatomy optically superimposed on the HPS, as shown in Figure 11b. In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material. With the exception of the HMPD, the airway visualization is realized using commercially available hardware components. The computer used for stereoscopic rendering has a Pentium 4 CPU running a Linux-based OS with a GeForce 4 GPU. The locations of the user, the HPS, and the endotracheal tube are tracked using a Polaris hybrid optical tracker from Northern Digital Inc. and custom-designed probes (Davis, Hamza-Lup, & Rolland, 2004).²
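
The tracking-to-display chain just described (tracker poses for the HPS, the trainee's head, and the tube; stereoscopic rendering of anatomy registered to the mannequin) reduces to composing rigid-body transforms. The sketch below is illustrative only, not the lab's code; the frame names and numeric poses are invented:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_view(point_world, T_world_head):
    """Express a world-space point in the tracked head (eye) frame for rendering."""
    return (np.linalg.inv(T_world_head) @ np.append(point_world, 1.0))[:3]

# The tracker reports the HPS pose in world coordinates; the lung model's
# anchor point is defined as a fixed offset in the HPS's own frame.
T_world_hps = make_pose(np.eye(3), np.array([1.0, 0.0, 0.5]))
anchor_hps = np.array([0.0, 0.1, 0.0, 1.0])        # anchor in the HPS frame
anchor_world = (T_world_hps @ anchor_hps)[:3]      # -> [1.0, 0.1, 0.5]

# Head tracked 2.5 m along world z; the anchor lands 2 m in front of the eye.
T_world_head = make_pose(np.eye(3), np.array([0.0, 0.0, 2.5]))
anchor_view = to_view(anchor_world, T_world_head)  # -> [1.0, 0.1, -2.0]
```

A stereoscopic pair would repeat the last step with each eye's offset from the tracked head pose.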

To develop the UIH system we had to acquire high-quality textured models of anatomy (e.g., models from the Visible Human Dataset) and develop methods for scaling these models to the HPS, as well as methods for registration of the virtual models with real landmarks on the HPS. Furthermore, we are working toward interfacing the breathing HPS with a physiologically based 3D real-time model of breathing lungs (Santhanam, Fidopiastis, Hamza-Lup, & Rolland, 2004). In the process of developing these methods we are using the ARCs for the development and visualization of the models. Based on recent algorithms for dynamic shared-state maintenance across distributed 3D environments (Hamza-Lup & Rolland, 2004b), we plan, before the end of 2005, to share the development of these models in 3D and in real time with Columbia University Medical School, which has been working with us as part of a related project on decimating the models for real-time 3D visualization and will further work with us on testing bringing these models alive (e.g., a deformable breathing 3D model of the lungs).
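
The registration of virtual models to landmarks on the HPS is described only at a high level; a standard way to solve such a problem is a least-squares rigid fit of corresponding point sets (the Kabsch/Procrustes solution via SVD). The sketch below, with synthetic landmark data, shows the idea; it is an assumption about the approach, not the lab's actual method:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping landmark set src onto dst
    (Kabsch/Procrustes via SVD); src and dst are (N, 3) corresponding points."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Virtual-model landmarks vs. the same landmarks probed on the mannequin
# (here: a known 90-degree rotation about z plus a translation).
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
probed = model @ R_true.T + np.array([0.2, -0.1, 0.3])

R, t = rigid_fit(model, probed)
residual = np.linalg.norm(model @ R.T + t - probed)  # ~0 for exact correspondences
```

With noisy probe data the same fit minimizes the sum of squared landmark distances, which is why it is a common choice for mannequin-to-model registration.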

5.2 From Hardware to Interfaces: Mobile Infospaces Model for AR Menu, Tool, Object, and Data Layouts

The development of HMPDs and mobile AR systems allows users to walk around and interact with 3D objects and to be fully immersed in virtual environments, while remaining able to see and interact with other users located nearby. 3D objects and 2D information overlays such as labels can be tightly integrated with the physical space, the room, and the body of the user. In AR environments, space is the interface: AR virtual spaces can make real-world objects and environments rich with "knowledge" and guidance in the form of overlaid and easily accessible virtual information (e.g., labels or "see-through" diagrams) and capabilities.

A research project called Mobile Infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object-manipulation tasks when compared to other media? To answer this question we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object-assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented exactly the same 3D assembly information, including computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital, Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike for the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data-object design aimed at mobile and team-based AR systems are not well established. (See, for example, the useful but incomplete guidelines by Gabbard and Hix, 2001.)

Our Mobile Infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The Mobile Infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached to an egocentric framework (i.e., a field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
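
A body-stabilized field of tools, as studied in these experiments, can be modeled by pinning each virtual object at a fixed offset in an egocentric frame and recomputing its world position from the tracked body pose. A toy sketch (frame conventions and values are illustrative, not from the study):

```python
import numpy as np

def body_to_world(yaw, body_pos, offset):
    """Place a tool fixed in the user's egocentric frame: rotate its
    body-frame offset by the body's yaw, then translate to the body."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return body_pos + R @ offset

# A tool pinned 0.5 m in front of the torso (body frame: +x is forward).
offset = np.array([0.5, 0.0, 0.0])
p0 = body_to_world(0.0, np.zeros(3), offset)        # facing +x
p1 = body_to_world(np.pi / 2, np.zeros(3), offset)  # after a quarter turn
```

Because the offset is constant in the body frame, the tool follows the user's rotation automatically, which is the behavior the spatial-updating experiments probed.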

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal: the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, virtual objects, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as their location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly with location, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the Mobile Infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of a CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as our Mobile Infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.
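
Per-user labels within a shared environment can be realized by keeping the shared geometry in one structure and resolving each user's view by merging private overlays at render time. The data model below is purely illustrative; all names and fields are invented:

```python
# Hypothetical data model for a shared AR scene: geometry and default
# labels are common, while each user may carry private annotations.
shared_scene = {
    "engine_block": {"mesh": "engine.obj", "label": "Engine block"},
    "valve_3":      {"mesh": "valve.obj",  "label": "Valve 3"},
}

private_overlays = {
    "alice": {"valve_3": {"label": "Valve 3 (replace)"}},
    "bob":   {},
}

def view_for(user):
    """Merge the shared scene with one user's private annotations,
    leaving the shared state itself untouched."""
    scene = {name: dict(props) for name, props in shared_scene.items()}
    for obj, patch in private_overlays.get(user, {}).items():
        scene[obj].update(patch)
    return scene
```

Each HMPD wearer would then render `view_for(user)`, so two collaborators can look at the same physical mock-up yet read different labels.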

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike a CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The Mobile Infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties for any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of a virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information such as labeling, color, and other virtual properties can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem with VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD was originally proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but together with retro-reflective material fully embedded into the HMPD it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing the potential for face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS, Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI, Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care, Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical: Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems, from immersive displays to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE: An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense, Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human-Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 419–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2,392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.

Rolland et al 549

Page 16: Development of Head-Mounted - University of Central Florida

location of the HPS the trainee and the endotrachealtube are tracked during the visualization The AR sys-tem integrates an HMPD and an optical tracker with aLinux-based PC to visualize internal airway anatomyoptically superimposed on a HPS as shown in Figure11b In this implementation the HPS wears a custom-made T-shirt made of retro-reflective material With theexception of the HMPD the airway visualization is real-ized using commercially available hardware compo-nents The computer used for stereoscopic renderinghas a Pentium 4 CPU running a Linux-based OS withGeForce 4 GPU The locations of the user the HPSand the endotracheal tube are tracked using a Polarishybrid optical tracker from Northern Digital Inc andcustom designed probes (Davis Hamza-Lup amp Rol-land 2004)2

In an effort to develop the UIH system we had toacquire high quality textured models of anatomy(eg models from the Visible Human Dataset) de-velop methods for scaling these models to the HPSas well as methods for registration of virtual modelswith real landmarks on the HPS Furthermore we areworking towards interfacing the breathing HPS witha physiologically-based 3D real-time model of breath-ing lungs (Santhanam Fidopiastis Hamza-Lup ampRolland 2004) In the process of developing suchmethods we are using the ARCs for the developmentand visualization of the models Based on recent al-gorithms for dynamic shared state maintenance acrossdistributed 3D environments (Hamza-Lup amp Rolland2004b) we plan before the end of year 2005 to be ableto share the development of these models in 3D and inreal time with Columbia University Medical Schoolwhich has been working with us as part of a relatedproject on decimating the models for real-time 3D visu-alization and will further be working with us on thetesting of bringing these models alive (eg a deform-able breathing 3D model of the lungs)

52 From Hardware to InterfacesMobile Infospaces Model for AR MenuTool Object and Data Layouts

The development of HMPDs and mobile AR sys-tems allows users to walk around and interact with 3Dobjects to be fully immersed in virtual environmentsbut also remain able to see and interact with other userslocated nearby 3D objects and 2D information overlayssuch as labels can be tightly integrated with the physicalspace the room and the body of the user In AR envi-ronments space is the interface AR virtual spaces canmake real world objects and environments rich withldquoknowledgerdquo and guidance in the form of overlaid andeasily accessible virtual information (eg labels of ldquoseethroughrdquo diagrams) and capabilities

A research project called Mobile Infospaces at the MIND Lab seeks general principles and guidelines for AR information systems. For example, how should AR menus and information overlays be laid out and organized in AR environments? The project focuses on what is novel about mobile AR systems: how should menus and data be organized around the body of a fully mobile user accessing high volumes of information?

Before optimizing AR, we felt it was important to start by asking a fundamental question: Can AR environments improve individual and team performance in navigation, search, and object-manipulation tasks when compared to other media? To answer this question we conducted an experiment to test whether spatially registered AR diagrams and instructions could improve human performance in an object-assembly task. We compared spatially registered AR interfaces to three other media interfaces that presented the exact same 3D assembly information: computer-aided instruction (standard screen), a printed manual, and non-spatially registered AR (i.e., 3D instructions on a see-through HMD). Compared to the other interfaces, spatially registered AR was dramatically superior, reducing error rates by as much as 82%, reducing the user's sense of cognitive load by 5–25%, and speeding the time to complete the assembly task by 5–26% (Tang, Owen, Biocca, & Mou, 2003). These improvements in performance will vary with tasks and environments, but the controlled comparison suggests that the spatial registration of information provided by AR can significantly affect user performance.

² Polaris is a registered trademark of Northern Digital Inc.

If spatially registered AR systems can help improve human performance in some applications, then how can designers of AR interfaces optimize the placement and organization of virtual information overlays? Unlike the classic windows desktop interface, systematic principles and guidelines for the full range of menu, tool, and data-object design aimed at mobile and team-based AR systems are not well established [see, for example, the useful but incomplete guidelines by Gabbard and Hix (2001)].

Our Mobile Infospaces project uses a model of human spatial cognition to derive and test principles and associated guidelines for organizing and displaying AR menus, object clusters, and tools. The model builds upon neuropsychological and behavioral research on how the brain keeps, segments, organizes, and tracks the location of objects and agents around the body (egocentric space) and the environment (exocentric space) as a user interacts with the environment (Bryant, 1992; Cutting & Vishton, 1995; Grusser, 1983; Previc, 1998; Rizzolatti, Gentilucci, & Matelli, 1985). The Mobile Infospaces model seeks to map some key cognitive properties of the space around the body of the user to provide guidelines for the creation of AR menus and information tools.

Using AR can help users keep track of high volumes of virtual objects, such as tools and data objects, attached like an egocentric framework (i.e., field) around the body. People have a tremendous ability to easily keep track of objects in the environment, known as spatial updating. In some experiments exploring whether this capability could be leveraged for AR menus and data, we explored whether new users could adapt their automatic ability to update the location of objects in a field around the body. Could they quickly learn and remember the location of a field of objects organized around their moving body, that is, virtual objects floating in space but affixed as tools to the central axis of the body? Even though such a task was new and somewhat unnatural, users were able to update the location of virtual objects attached to a framework around the body with as little as 30 seconds of experience, or by simply being told that the objects would move (Mou, Biocca, Owen, et al., 2004). This finding suggests that users of AR systems might be able to quickly keep track of and access many tools and objects floating in space around their body, even though a field of floating objects attached to the body is a novel model, not experienced in the natural world because of the simple laws of gravity.
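The body-stabilized layout studied here can be modeled as tool anchors stored in an egocentric frame and remapped to world coordinates whenever the user translates or turns. A toy sketch (the frame conventions, y-up with yaw about the vertical axis, are assumptions for illustration):

```python
import numpy as np

def tool_world_positions(body_pos, body_yaw, tools_body):
    """Map tool anchors from a body-fixed (egocentric) frame to the world.

    body_pos   -- (3,) user position in world coordinates
    body_yaw   -- heading in radians about the vertical (y) axis
    tools_body -- (N, 3) tool offsets relative to the body's central axis
    """
    c, s = np.cos(body_yaw), np.sin(body_yaw)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])   # yaw rotation, y-up convention
    return tools_body @ R.T + body_pos
```

A tool kept one meter in front of the chest, for example, stays one meter in front however the user walks or turns, which is exactly the spatial-updating behavior the experiments probed.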

If users can make use of tools and menus freely moving around the body, then are there "sweet spot" locations around the body that may have different cognitive properties? Some areas are clearly more attention-getting, but they may also have slightly different ergonomics, different memory properties, or even slightly different meaning (semantic properties). Basic psychological research indicates that locations in physical and AR space are by no means psychologically equal; the psychological properties of space around the body and the environment are highly asymmetrical (Mou, Biocca, Tang, & Owen, 2005). Perception, attention, meaning, and memory for objects can vary with their locations and organization in egocentric and exocentric space.

In a program to map some of the static and dynamic properties of the spatial location of virtual objects around a moving body, we conducted a study of the ergonomics of the layout of objects and menus. The experiment found that the speed with which a user wearing an HMD can find and place a virtual tool in the space around the body can vary by as much as 300% (Biocca, Eastin, & Daugherty, 2001). This finding has implications for where to place frequently used tools and objects. Locations in space, especially those around the body, can also have different psychological properties. For example, a virtual object, especially agents (i.e., virtual faces), may be perceived with slightly different shades of meaning (i.e., connotations) as its location around the body varies (Biocca, David, Tang, & Lim, 2004). Because the meaning of faces varied strongly, there are implications for where in the space around the body designers might place video-conference windows or artificial agents.


Current research on the Mobile Infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the information comes from each user's display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, programs such as our Mobile Infospaces project seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.
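Because each HMPD renders its own image pair, such per-user customization reduces to filtering the shared scene per viewer before rendering. A minimal sketch with a hypothetical data model (not the system's actual interface):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Label:
    text: str
    owner: Optional[str] = None   # None means shared; otherwise private

def labels_for(user: str, labels: List[Label]) -> List[str]:
    """Return the labels a given viewer should see: all shared labels
    plus any private labels owned by that viewer."""
    return [l.text for l in labels if l.owner is None or l.owner == user]
```

Each user's renderer would call this with that user's own identity, so private annotations never reach another collaborator's display.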

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among others, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand-manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The Mobile Infospaces research program explores guidelines for integrating and organizing physical and virtual tools.

Immersive-AR properties. The HMPD, combined with retro-reflective surfaces, confers AR properties on any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique, perspective-accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics but, with the retro-reflective material fully embedded into the HMPD, opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro-retroreflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also providing a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS, Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI, Inc. for providing the HPS and stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.


Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the Army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and augmented reality applications in manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of wearable computers and augmented reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of the MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head-mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.


References

American Heart Association (1992) Guidelines for cardiopul-monary resuscitation and emergency cardiac caremdashPart IIAdult Basic Life Support Journal of the American MedicalAssociation 268 2184ndash2198

Billinghurst M Kato H amp Poupyrev I (2001) Themagicbook Moving seamlessly between reality and virtual-ity IEEE Computer Graphics and Applications (CGA)21(3) 6ndash8

Biocca F amp Rolland JP (2004 August) US Patent No6774869 Washington DC US Patent and TrademarkOffice

Biocca F David P Tang A amp Lim L (2004 May) Doesvirtual space come precoded with meaning Location aroundthe body in virtual space affects the meaning of objects andagents Paper presented at the 54th Annual Conference ofthe International Communication Association New Or-leans LA

Biocca F Eastin M amp Daugherty T (2001) Manipulat-ing objects in the virtual space around the body Relationshipbetween spatial location ease of manipulation spatial recalland spatial ability Paper presented at the InternationalCommunication Association Washington DC

Biocca F Harms C amp Burgoon J (2003) Towards amore robust theory and measure of social presence Reviewand suggested criteria Presence Teleoperators and VirtualEnvironments 12(5) 456ndash480

546 PRESENCE VOLUME 14 NUMBER 5

Bryant D J (1992) A spatial representation system in hu-mans Psycholoquy 3(16) 1

Cruz-Neira C D Sandin J amp DeFanti T A (1993 Au-gust) Surround-screen projection-based virtual reality Thedesign and implementation of the CAVE Proceedings ofACM SIGGRAPH 1993 (pp 135ndash142) Anaheim CA

Cutting J E amp Vishton P M (1995) Perceiving layoutand knowing distances The integration relative potencyand contextual use of different information about depth InW Epstein amp S Rogers (Eds) Perception of space and mo-tion (pp 69ndash117) San Diego CA Academic Press

Davis L Hamza-Lup F amp Rolland J P (2004) A methodfor designing marker-based tracking probes InternationalSymposium on Mixed and Augmented Reality ISMAR rsquo04(pp 120ndash129) Washington DC

Davis L Rolland J P Hamza-Lup F Ha Y Norfleet JPettitt B et al (2003) Enabling a continuum of virtualenvironment experiences IEEE Computer Graphics and Ap-plications (CGA) 23(2) 10ndash12

Fidopiastis C M Furhman C Meyer C amp Rolland J P(2005) Methodology for the iterative evaluation of proto-type head-mounted displays in virtual environments Visualacuity metrics Presence Teleoperators and Virtual Environ-ments 14(5) 550ndash562

Fergason J (1997) US Patent No 5621572 WashingtonDC US Patent and Trademark Office

Fisher R (1996) US Patent No 5572229 Washington DCUS Patent and Trademark Office

Gabbard J L amp Hix D (2001) Researching usability de-sign and evaluation guidelines for Augmented Reality Sys-tems Retrieved May 1 2004 from wwwsvvteduclassesESM4714Student_Projclass00gabbard

Girardot A amp Rolland J P (1999) Assembly and investiga-tion of a projective head-mounted display (Technical ReportTR-99-003) Orlando FL University of Central Florida

Grusser O J (1983) Multimodal structure of the extraper-sonal space In A Hein amp M Jeannerod (Eds) Spatiallyoriented behavior (pp 327ndash352) New York Springer

Ha Y amp Rolland J P (2004 October) US Patent No6804066 B1 Washington DC US Patent and TrademarkOffice

Hamza-Lup F Davis L Hughes C amp Rolland J (2002)Where digital meets physicalmdashDistributed augmented real-ity environments ACM Crossroads 2002 9(3)

Hamza-Lup F G Davis L amp Rolland J P (2002 Sep-tember) The ARC display An augmented reality visualiza-

tion center International Symposium on Mixed and Aug-mented Reality (ISMAR) Darmstadt Germany RetrievedMay 17 2005 from httpstudierstubeorgismar2002demosismar_rollandpdf

Hamza-Lup F G amp Rolland J P (2004a March) Adap-tive scene synchronization for virtual and mixed reality envi-ronments Proceedings of IEEE Virtual Reality 2004 (pp99ndash106) Chicago IL

Hamza-Lup F G amp Rolland J P (2004b) Scene synchro-nization for real-time interaction in distributed mixed realityand virtual reality environments Presence Teleoperators andVirtual Environments 13(3) 315ndash327

Hamza-Lup F G Rolland J P amp Hughes C E (2005) Adistributed augmented reality system for medical trainingand simulation In B J Brian (Ed) Energy simulation-training ocean engineering and instrumentation Researchpapers of the link foundation fellows (Vol 4 pp 213ndash235)Rochester NY Rochester Press

Hua H Brown L D amp Gao C (2004) System and inter-face framework for SCAPE as a collaborative infrastructurePresence Teleoperators and Virtual Environments 13(2)234ndash250

Hua H Girardot A Gao C amp Rolland J P (2000) En-gineering of head-mounted projective displays Applied Op-tics 39(22) 3814ndash3824

Hua H Ha Y amp Rolland J P (2003) Design of an ultra-light and compact projection lens Applied Optics 42(1)97ndash107

Hua H amp Rolland J P (2004) US Patent No 6731434B1 Washington DC US Patent and Trademark Office

Huang Y Ko F Shieh H Chen J amp Wu S T (2002)Multidirectional asymmetrical microlens array light controlfilms for high performance reflective liquid crystal displaysSID Digest 869ndash873

Inami M Kawakami N Sekiguchi D Yanagida YMaeda T amp Tachi S (2000) Visuo-haptic display usinghead-mounted projector Proceedings of IEEE Virtual Real-ity 2000 (pp 233ndash240) Los Alamitos CA

Kawakami N Inami M Sekiguchi D Yangagida YMaeda T amp Tachi S (1999) Object-oriented displays Anew type of display systemsmdashFrom immersive display toobject-oriented displays IEEE International Conference onSystems Man and Cybernetics (Vol 5 pp 1066ndash1069)Piscataway NJ

Kijima R amp Ojika T (1997) Transition between virtual en-vironment and workstation environment with projective

Rolland et al 547

head-mounted display Proceedings of IEEE 1997 VirtualReality Annual International Symposium (pp 130ndash137)Los Alamitos CA

Krueger M (1977) Responsive environments Proceedings of theNational Computer Conference (pp 423ndash433) Montvale NJ

Krueger M (1985) VIDEOPLACEmdashAn artificial realityProceedings of ACM SIGCHIrsquo85 (pp 34ndash40) San Fran-cisco CA

Martins R Ha Y amp Rolland J P (2003) Ultra-compactlens assembly for a head-mounted projector UCF Patentfiled University of Central Florida

Martins R amp Rolland J P (2003) Diffraction properties ofphase conjugate material In C E Rash and E R Colin(Eds) Proceedings of the SPIE Aerosense Helmet- and Head-Mounted Displays VIII Technologies and Applications 5079277ndash283

Milgram P amp Kishino F (1994) A taxonomy of mixed real-ity visual displays IECE Trans Information and Systems(Special Issue on Networked Reality) E77-D(12) 1321ndash1329

Mou W Biocca F Owen C B Tang A Xiao F amp LimL (2004) Frames of reference in mobile augmented realitydisplays Journal of Experimental Psychology Applied 10(4)238ndash244

Mou W Biocca F Tang A amp Owen C B (2005) Spati-alized interface design of mobile augmented reality systemsInternational Journal of Human Computer Studies

Murray W B amp Schneider A J L (1997) Using simulatorsfor education and training in anesthesiology ASA Newsletter6(10) Retrieved May 1 2003 from httpwwwasahqorgNewsletters199710_97Simulators_1097html

Parsons J amp Rolland J P (1998) A non-intrusive displaytechnique for providing real-time data within a surgeonrsquoscritical area of interest Proceedings of Medicine Meets Vir-tual Reality 98 (pp 246ndash251) Newport Beach CA

Poizat D amp Rolland J P (1998) Use of retro-reflective sheetsin optical system design (Technical Report TR98-006) Or-lando FL University of Central Florida

Previc F H (1998) The neuropsychology of 3D space Psy-chological Bulletin 124(2) 123ndash164

Reddy C (2003) A non-obtrusive head mounted face cap-ture system Unpublished Masterrsquos Thesis Michigan StateUniversity

Reddy C Stockman G C Rolland J P amp Biocca F A(2004) Mobile face capture for virtual face videos InCVPRrsquo04 The First IEEE Workshop on Face Processing

in Video Retrieved July 7 2004 from httpwwwvisioninterfacenetfpiv04papershtml

Rizzolatti G Gentilucci M amp Matelli M (1985) Selectivespatial attention One center one circuit or many circuitsIn M I Posner amp O S M Marin (Eds) Attention andperformance (pp 251ndash265) Hillsdale NJ Erlbaum

Rodriguez A Foglia M amp Rolland J P (2003) Embed-ded training display technology for the armyrsquos future com-bat vehicles Proceedings of the 24th Army Science Conference(pp 1ndash6) Orlando FL

Rolland J P Biocca F Hua H Ha Y Gao C amp Har-rysson O (2004) Teleportal augmented reality systemIntegrating virtual objects remote collaborators and physi-cal reality for distributed networked manufacturing In KOng amp AYC Nee (Eds) Virtual and Augmented RealityApplications in Manufacturing (Ch 11 pp 179ndash200)London Springer-Verlag

Rolland J P amp Fuchs H (2001) Optical versus video see-through head-mounted displays In W Barfield amp TCaudell (Eds) Fundamentals of Wearable Computers andAugmented Reality (pp 113ndash156) Mahwah NJ Erlbaum

Rolland J P Ha Y amp Fidopiastis C (2004) Albertian er-rors in head-mounted displays Choice of eyepoints locationfor a near or far field tasks visualization JOSA A 21(6)901ndash912

Rolland J P Hamza-Lup F Davis L Daly J Ha YMartin G et al (2003 January) Development of a train-ing tool for endotracheal intubation Distributed aug-mented reality Proceedings of Medicine Meets Virtual Real-ity (Vol 11 pp 288ndash294) Newport Beach California

Rolland J P Martins R amp Ha Y (2003) Head-mounteddisplay by integration of phase-conjugate material UCFPatent Filed University of Central Florida

Rolland J P Parsons J Poizat D amp Hancock D (1998July) Conformal optics for 3D visualization Proceedings ofthe International Optical Design Conference 1998 SPIE3482 (pp 760ndash764) Kona HI

Rolland J P Stanney K Goldiez B Daly J Martin GMoshell M et al (2004 July) Overview of research inaugmented and virtual environments RAVES Proceedingsof the International Conference on Cybernetics and Informa-tion Technologies Systems and Applications (CITSArsquo04) 419ndash24

Santhanam A Fidopiastis C Hamza-Lup F amp Rolland J P(2004) Physically-based deformation of high-resolution 3Dmodels for augmented reality based medical visualization

548 PRESENCE VOLUME 14 NUMBER 5

Proceedings of MICCAI-AMI-ARCS International Work-shop on Augmented Environments for Medical Imagingand Computer-Aided Surgery 2004 (pp 21ndash32) RennesSt Malo France

Short J Williams E and Christie B (1976) Visual com-munication and social interaction In R M Baecker (Ed)Readings in groupware and computer-supported cooperativework assisting human-human collaboration (pp 153ndash164)San Mateo CA Morgan Kaufmann

Sutherland I E (1968) A head mounted three dimensional

display Proceedings of the Fall Joint Computer ConferenceAFIPS Conference 33 757ndash764

Tang A Owen C Biocca F amp Mou W (2003) Compar-ative effectiveness of augmented reality in object assemblyProceedings of the Conference on Human Factors in Comput-ing Systems ACM CHI (pp 73ndash80) Ft Lauderdale FL

Walls R M Barton E D amp McAfee A T (1999) 2392emergency department intubations First report of the on-going national emergency airway registry study Annals ofEmergency Medicine 26 364ndash403

Rolland et al 549


Current research on the mobile infospaces project is expanding to produce (1) a map of psychologically relevant spatial frames, (2) a set of guidelines based on existing research and practices, and (3) an experiment to explore how spatial organization of information can augment or support human cognition.

6 Summary of Current Properties of HMPDs and Issues Related to Future Work

Integrating immersive physical and virtual spaces. Many of the fundamental issues in the design of collaborative environments deal with the representation and use of space. HMPDs can combine the wide-screen immersive experience of the CAVE with the hands-on, close-to-the-body immediacy of see-through head-worn AR systems. The ARCs constitute an example of the use of the technology in wide-screen immersive environments, which are quickly deployable indoors or outdoors. Various applications in collaborative visualization may require simultaneous display of large immersive spaces, such as rooms and landscapes, together with more detailed handheld spaces, such as models and tools.

With HMPDs, each person has a unique perspective on both physical and virtual space. Because the imagery comes from each user's own display, information seen by each user, such as labels, can be unique to that user, allowing for customization and private information within a shared environment. The future development of HMPDs shares some common issues with other HMDs, such as resolution, luminosity, FOV, and comfort, and also with other AR systems, such as registration accuracy. Beyond these issues, efforts such as those addressed by our mobile infospaces program seek to model the space afforded by HMPDs as an integrated information environment, making full use of the HMPD's ability to integrate information and the faces of collaborators around the body and the environment.
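The idea of a shared scene with per-user private labels can be sketched in a few lines. This is a minimal illustration only; the class names and data below are invented for the example and are not part of any HMPD software described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """State shared by every viewer of the collaborative scene."""
    name: str
    position: tuple  # world coordinates, identical for all users

@dataclass
class UserView:
    """Per-user layer rendered only on that user's HMPD."""
    user_id: str
    private_labels: dict = field(default_factory=dict)  # object name -> label text

    def render_label(self, obj: SceneObject) -> str:
        # The shared object is the same for everyone; the label text
        # (if any) comes from this user's private annotation layer.
        return self.private_labels.get(obj.name, "")

# Hypothetical usage: two collaborators see the same model, different labels.
heart = SceneObject("heart_model", (0.0, 1.2, 0.5))
surgeon = UserView("surgeon", {"heart_model": "mitral valve: incision site"})
student = UserView("student", {"heart_model": "left atrium"})

assert surgeon.render_label(heart) != student.render_label(heart)
```

Because each HMPD draws its own image, this separation between shared state and per-user layers falls out of the display architecture rather than requiring any masking in a common projection.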

AR and collaborative support. Like CAVE technology or other display systems, the HMPD approach supports users experiencing, among other things, wide-screen immersive 3D visualizations and environments. But unlike the CAVE, the projection display is head-worn, providing each user with a correct perspective viewpoint on the virtual scene, essential to the display of objects that are to be hand manipulated close to the body. Continued development of these features needs to consider, as with other AR systems, continued registration of virtual and physical objects, especially in the context of multiple users. The mobile infospaces research program explores guidelines for integrating and organizing physical and virtual tools.
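The per-user viewpoint distinction can be made concrete with a small sketch: each tracked head position yields its own viewing direction onto the shared virtual object, whereas a CAVE computes its projection for a single tracked viewpoint. All coordinates and names here are hypothetical, chosen only to illustrate the geometry.

```python
import math

def view_direction(head_pos, target):
    # Normalized gaze vector from a tracked head position toward a
    # shared virtual object. With HMPDs this is computed per user,
    # so every wearer gets a perspective-correct view of the scene.
    d = [t - h for h, t in zip(head_pos, target)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

shared_object = (0.0, 1.0, 0.0)            # hypothetical position, meters
user_a = view_direction((1.0, 1.5, 2.0), shared_object)
user_b = view_direction((-1.0, 1.5, 2.0), shared_object)
assert user_a != user_b                     # distinct perspectives on one scene
```

A full renderer would build a view matrix from the complete 6-DOF head pose, but the essential point is the same: the projection is recomputed for every user rather than once for a shared screen.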

Immersive-AR properties. The properties of the HMPD with retro-reflective surfaces support AR properties of any object that incorporates some retro-reflective material. For example, this property can be used in immersive applications to allow projection walk-around displays, such as the visualization of a full body in a display tube-screen, or on handheld objects such as tablets. Unlike see-through optical displays, the AR objects produced via an HMPD have some of the properties of visual occlusion. For example, physical objects appearing in front of the virtual object will occlude it, as shown in Figure 3. Because the virtual images are attached to physical objects with retro-reflective surfaces, users view the space immediately around their body, but they can also move around, pick up, and interact with physical objects on which virtual information, such as labeling, color, and other virtual properties, can be annotated.

Designing spaces for multiple collaborative users. Collaborative spaces need to support spatial interaction among active users. The design of T-HMPDs seeks to minimize obscuring the face of the user to other local users in the physical space, a problem in VR HMDs, while still immersing the visual system in a unique-perspective, accurate, immersive AR experience. The T-HMPD attempts to integrate some of the social cues from two distributed collaborative spaces into one common space. Much research is underway to develop the T-HMPD to achieve real-time distributed face-to-face interactions, a promise of the technology, and to test its benefit compared to 2D video streaming.

Mobile spaces. While the HMPD originally was proposed to capitalize on retro-reflective material strategically positioned in the environment, we have extended the display concept to provide a fully mobile display, the M-HMPD. This display still uses projection optics, but with retro-reflective material fully embedded into the HMPD itself, it opens a breadth of new applications in natural environments. Future work includes fabricating the custom-designed micro retro-reflective material needed for the M-HMPD and expanding the FOV for various applications.

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either in the environment at strategic locations or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space in order to support spatial interaction among active users, while also offering a potential face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6,774,869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.

Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fergason, J. (1997). U.S. Patent No. 5,621,572. Washington, DC: U.S. Patent and Trademark Office.

Fisher, R. (1996). U.S. Patent No. 5,572,229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6,804,066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6,731,434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eye-point location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 4, 19–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing national emergency airway registry study. Annals of Emergency Medicine, 26, 364–403.

Rolland et al 549

Page 19: Development of Head-Mounted - University of Central Florida

posed to capitalize on retro-reflective material strategi-cally positioned in the environment we have extendedthe display concept to provide a fully mobile display theM-HMPD This display still uses projection optics buttogether with retro-reflective material fully embeddedinto the HMPD opens a breadth of new applications innatural environments Future work includes fabricatingthe custom-designed micro retroreflective materialneeded for the M-HMPD and expanding on the FOVfor various applications

7 Conclusion

In this paper we have reviewed a novel emerging technology, the HMPD, based on custom-designed miniature projectors mounted on the head and coupled with retro-reflective material. We have demonstrated that such material may be positioned either at strategic locations in the environment or within the HMPD itself, the latter leading to full mobility with M-HMPDs. With the development of HMPDs, spaces such as the quickly deployable augmented reality centers (ARCs) and mobile AR systems allow users to interact with 3D objects and other agents located around a user or remotely. Yet another evolution of the HMPD, the teleportal T-HMPD, seeks to minimize obscuring the face of the user to other local users in the physical space, in order to support spatial interaction among active users while also providing the potential for face-to-face collaboration with remote participants. Projection technologies create virtual spaces within physical spaces. In some ways, we can see the approach to the design of HMPDs and related technologies as a program to integrate various issues in the representation of AR spaces and to integrate distributed spaces into one fully embodied, shared, collaborative space.

Acknowledgments

We thank NVIS Inc. for providing the opto-mechanical design of the HMPD shown in Figure 1c and Figure 10. We thank 3M Corporation for the generous donation of the cubed retro-reflective material. We also thank Celina Imielinska from Columbia University for providing the models from the Visible Human Dataset displayed in this paper. We thank METI Inc. for providing the HPS and for stimulating discussions about medical training tools. This research started with seed support from the French ELF Production Corporation and the MIND Lab at Michigan State University, followed by grants from the National Science Foundation (IIS 00-82016 ITR, EIA-99-86051, IIS 03-07189 HCI, and IIS 02-22831 HCI), the US Army STRICOM, the Office of Naval Research (contracts N00014-02-1-0261, N00014-02-1-0927, and N00014-03-1-0677), the Florida Photonics Center of Excellence, METI Corporation, Adastra Labs LLC, and the LINK and MSU Foundations.

References

American Heart Association. (1992). Guidelines for cardiopulmonary resuscitation and emergency cardiac care—Part II: Adult basic life support. Journal of the American Medical Association, 268, 2184–2198.

Billinghurst, M., Kato, H., & Poupyrev, I. (2001). The MagicBook: Moving seamlessly between reality and virtuality. IEEE Computer Graphics and Applications (CGA), 21(3), 6–8.

Biocca, F., & Rolland, J. P. (2004, August). U.S. Patent No. 6774869. Washington, DC: U.S. Patent and Trademark Office.

Biocca, F., David, P., Tang, A., & Lim, L. (2004, May). Does virtual space come precoded with meaning? Location around the body in virtual space affects the meaning of objects and agents. Paper presented at the 54th Annual Conference of the International Communication Association, New Orleans, LA.

Biocca, F., Eastin, M., & Daugherty, T. (2001). Manipulating objects in the virtual space around the body: Relationship between spatial location, ease of manipulation, spatial recall, and spatial ability. Paper presented at the International Communication Association, Washington, DC.

Biocca, F., Harms, C., & Burgoon, J. (2003). Towards a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators and Virtual Environments, 12(5), 456–480.

Bryant, D. J. (1992). A spatial representation system in humans. Psycoloquy, 3(16), 1.

Cruz-Neira, C., Sandin, D. J., & DeFanti, T. A. (1993, August). Surround-screen projection-based virtual reality: The design and implementation of the CAVE. Proceedings of ACM SIGGRAPH 1993 (pp. 135–142). Anaheim, CA.

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In W. Epstein & S. Rogers (Eds.), Perception of space and motion (pp. 69–117). San Diego, CA: Academic Press.

Davis, L., Hamza-Lup, F., & Rolland, J. P. (2004). A method for designing marker-based tracking probes. International Symposium on Mixed and Augmented Reality, ISMAR '04 (pp. 120–129). Washington, DC.

Davis, L., Rolland, J. P., Hamza-Lup, F., Ha, Y., Norfleet, J., Pettitt, B., et al. (2003). Enabling a continuum of virtual environment experiences. IEEE Computer Graphics and Applications (CGA), 23(2), 10–12.

Fergason, J. (1997). U.S. Patent No. 5621572. Washington, DC: U.S. Patent and Trademark Office.

Fidopiastis, C. M., Furhman, C., Meyer, C., & Rolland, J. P. (2005). Methodology for the iterative evaluation of prototype head-mounted displays in virtual environments: Visual acuity metrics. Presence: Teleoperators and Virtual Environments, 14(5), 550–562.

Fisher, R. (1996). U.S. Patent No. 5572229. Washington, DC: U.S. Patent and Trademark Office.

Gabbard, J. L., & Hix, D. (2001). Researching usability design and evaluation guidelines for augmented reality systems. Retrieved May 1, 2004, from www.sv.vt.edu/classes/ESM4714/Student_Proj/class00/gabbard

Girardot, A., & Rolland, J. P. (1999). Assembly and investigation of a projective head-mounted display (Technical Report TR-99-003). Orlando, FL: University of Central Florida.

Grusser, O. J. (1983). Multimodal structure of the extrapersonal space. In A. Hein & M. Jeannerod (Eds.), Spatially oriented behavior (pp. 327–352). New York: Springer.

Ha, Y., & Rolland, J. P. (2004, October). U.S. Patent No. 6804066 B1. Washington, DC: U.S. Patent and Trademark Office.

Hamza-Lup, F., Davis, L., Hughes, C., & Rolland, J. (2002). Where digital meets physical—Distributed augmented reality environments. ACM Crossroads, 9(3).

Hamza-Lup, F. G., Davis, L., & Rolland, J. P. (2002, September). The ARC display: An augmented reality visualization center. International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany. Retrieved May 17, 2005, from http://studierstube.org/ismar2002/demos/ismar_rolland.pdf

Hamza-Lup, F. G., & Rolland, J. P. (2004a, March). Adaptive scene synchronization for virtual and mixed reality environments. Proceedings of IEEE Virtual Reality 2004 (pp. 99–106). Chicago, IL.

Hamza-Lup, F. G., & Rolland, J. P. (2004b). Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments. Presence: Teleoperators and Virtual Environments, 13(3), 315–327.

Hamza-Lup, F. G., Rolland, J. P., & Hughes, C. E. (2005). A distributed augmented reality system for medical training and simulation. In B. J. Brian (Ed.), Energy, simulation-training, ocean engineering and instrumentation: Research papers of the Link Foundation fellows (Vol. 4, pp. 213–235). Rochester, NY: Rochester Press.

Hua, H., Brown, L. D., & Gao, C. (2004). System and interface framework for SCAPE as a collaborative infrastructure. Presence: Teleoperators and Virtual Environments, 13(2), 234–250.

Hua, H., Girardot, A., Gao, C., & Rolland, J. P. (2000). Engineering of head-mounted projective displays. Applied Optics, 39(22), 3814–3824.

Hua, H., Ha, Y., & Rolland, J. P. (2003). Design of an ultralight and compact projection lens. Applied Optics, 42(1), 97–107.

Hua, H., & Rolland, J. P. (2004). U.S. Patent No. 6731434 B1. Washington, DC: U.S. Patent and Trademark Office.

Huang, Y., Ko, F., Shieh, H., Chen, J., & Wu, S. T. (2002). Multidirectional asymmetrical microlens array light control films for high performance reflective liquid crystal displays. SID Digest, 869–873.

Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (2000). Visuo-haptic display using head-mounted projector. Proceedings of IEEE Virtual Reality 2000 (pp. 233–240). Los Alamitos, CA.

Kawakami, N., Inami, M., Sekiguchi, D., Yanagida, Y., Maeda, T., & Tachi, S. (1999). Object-oriented displays: A new type of display systems—From immersive display to object-oriented displays. IEEE International Conference on Systems, Man, and Cybernetics (Vol. 5, pp. 1066–1069). Piscataway, NJ.

Kijima, R., & Ojika, T. (1997). Transition between virtual environment and workstation environment with projective head-mounted display. Proceedings of IEEE 1997 Virtual Reality Annual International Symposium (pp. 130–137). Los Alamitos, CA.

Krueger, M. (1977). Responsive environments. Proceedings of the National Computer Conference (pp. 423–433). Montvale, NJ.

Krueger, M. (1985). VIDEOPLACE—An artificial reality. Proceedings of ACM SIGCHI '85 (pp. 34–40). San Francisco, CA.

Martins, R., Ha, Y., & Rolland, J. P. (2003). Ultra-compact lens assembly for a head-mounted projector. UCF patent filed, University of Central Florida.

Martins, R., & Rolland, J. P. (2003). Diffraction properties of phase conjugate material. In C. E. Rash & E. R. Colin (Eds.), Proceedings of the SPIE Aerosense: Helmet- and Head-Mounted Displays VIII: Technologies and Applications, 5079, 277–283.

Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems (Special Issue on Networked Reality), E77-D(12), 1321–1329.

Mou, W., Biocca, F., Owen, C. B., Tang, A., Xiao, F., & Lim, L. (2004). Frames of reference in mobile augmented reality displays. Journal of Experimental Psychology: Applied, 10(4), 238–244.

Mou, W., Biocca, F., Tang, A., & Owen, C. B. (2005). Spatialized interface design of mobile augmented reality systems. International Journal of Human Computer Studies.

Murray, W. B., & Schneider, A. J. L. (1997). Using simulators for education and training in anesthesiology. ASA Newsletter, 6(10). Retrieved May 1, 2003, from http://www.asahq.org/Newsletters/1997/10_97/Simulators_1097.html

Parsons, J., & Rolland, J. P. (1998). A non-intrusive display technique for providing real-time data within a surgeon's critical area of interest. Proceedings of Medicine Meets Virtual Reality 98 (pp. 246–251). Newport Beach, CA.

Poizat, D., & Rolland, J. P. (1998). Use of retro-reflective sheets in optical system design (Technical Report TR98-006). Orlando, FL: University of Central Florida.

Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124(2), 123–164.

Reddy, C. (2003). A non-obtrusive head mounted face capture system. Unpublished master's thesis, Michigan State University.

Reddy, C., Stockman, G. C., Rolland, J. P., & Biocca, F. A. (2004). Mobile face capture for virtual face videos. In CVPR '04: The First IEEE Workshop on Face Processing in Video. Retrieved July 7, 2004, from http://www.visioninterface.net/fpiv04/papers.html

Rizzolatti, G., Gentilucci, M., & Matelli, M. (1985). Selective spatial attention: One center, one circuit, or many circuits? In M. I. Posner & O. S. M. Marin (Eds.), Attention and performance (pp. 251–265). Hillsdale, NJ: Erlbaum.

Rodriguez, A., Foglia, M., & Rolland, J. P. (2003). Embedded training display technology for the army's future combat vehicles. Proceedings of the 24th Army Science Conference (pp. 1–6). Orlando, FL.

Rolland, J. P., Biocca, F., Hua, H., Ha, Y., Gao, C., & Harrysson, O. (2004). Teleportal augmented reality system: Integrating virtual objects, remote collaborators, and physical reality for distributed networked manufacturing. In K. Ong & A. Y. C. Nee (Eds.), Virtual and Augmented Reality Applications in Manufacturing (Ch. 11, pp. 179–200). London: Springer-Verlag.

Rolland, J. P., & Fuchs, H. (2001). Optical versus video see-through head-mounted displays. In W. Barfield & T. Caudell (Eds.), Fundamentals of Wearable Computers and Augmented Reality (pp. 113–156). Mahwah, NJ: Erlbaum.

Rolland, J. P., Ha, Y., & Fidopiastis, C. (2004). Albertian errors in head-mounted displays: Choice of eyepoint location for a near- or far-field task visualization. JOSA A, 21(6), 901–912.

Rolland, J. P., Hamza-Lup, F., Davis, L., Daly, J., Ha, Y., Martin, G., et al. (2003, January). Development of a training tool for endotracheal intubation: Distributed augmented reality. Proceedings of Medicine Meets Virtual Reality (Vol. 11, pp. 288–294). Newport Beach, CA.

Rolland, J. P., Martins, R., & Ha, Y. (2003). Head-mounted display by integration of phase-conjugate material. UCF patent filed, University of Central Florida.

Rolland, J. P., Parsons, J., Poizat, D., & Hancock, D. (1998, July). Conformal optics for 3D visualization. Proceedings of the International Optical Design Conference 1998, SPIE 3482 (pp. 760–764). Kona, HI.

Rolland, J. P., Stanney, K., Goldiez, B., Daly, J., Martin, G., Moshell, M., et al. (2004, July). Overview of research in augmented and virtual environments: RAVES. Proceedings of the International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA '04), 419–24.

Santhanam, A., Fidopiastis, C., Hamza-Lup, F., & Rolland, J. P. (2004). Physically-based deformation of high-resolution 3D models for augmented reality based medical visualization. Proceedings of MICCAI-AMI-ARCS International Workshop on Augmented Environments for Medical Imaging and Computer-Aided Surgery 2004 (pp. 21–32). Rennes/St. Malo, France.

Short, J., Williams, E., & Christie, B. (1976). Visual communication and social interaction. In R. M. Baecker (Ed.), Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration (pp. 153–164). San Mateo, CA: Morgan Kaufmann.

Sutherland, I. E. (1968). A head mounted three dimensional display. Proceedings of the Fall Joint Computer Conference, AFIPS Conference, 33, 757–764.

Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, ACM CHI (pp. 73–80). Ft. Lauderdale, FL.

Walls, R. M., Barton, E. D., & McAfee, A. T. (1999). 2392 emergency department intubations: First report of the ongoing National Emergency Airway Registry study. Annals of Emergency Medicine, 26, 364–403.
