
VR Planet: Interface for Meta-View and Feet Interaction of VR Contents

Kevin Fan, Liwei Chan, Daiya Kato, Kouta Minamizawa, and Masahiko Inami
Keio University, NTT Media Intelligence Laboratories, The University of Tokyo

Figure 1: a) 360 video projected as a planet, b) stepping on the planet view to trigger a light, c) audience interacting through the projected planet, d) walking to different planets to view immersions

Keywords: planet view, stereographic projection, feet interaction, VR projection

Concepts: • Human-centered computing → Interaction techniques; Visualization;

1 Introduction

The emergence of head-mounted displays (HMDs) has enabled us to experience virtual environments in an immersive manner. At the same time, omnidirectional cameras, which capture real-life environments in all 360-degree angles as either still images or motion video, are also gaining attention. Using HMDs, we can view these captured omnidirectional images in immersion, as though we are actually "being there". However, as a requirement for immersion, our view of these omnidirectional images in the HMD is usually presented as a first-person view and limited by our natural field of view (FOV); i.e., we only see the fraction of the environment that we are facing, while the rest of the 360-degree environment is hidden from our view. This is even more problematic in telexistence situations, where the scene is live, so setting a default facing direction for the HMD is impractical. We can often observe people wearing HMDs turning their heads frantically, trying to locate interesting occurrences in the omnidirectional environment they are viewing.

Viewing only a fraction of the environment according to our head orientation is a requirement for immersion, but it hinders our VR experience. We therefore believe there should be interfaces that allow us to easily view the full VR environment. We propose a "planet" view for visualizing a meta-view of the environment that a user is immersed in (Figure 1a). Equirectangular images taken from an omnidirectional camera are transformed into "planets". The planets are then placed at the user's feet and become visible when looking down. The planets capture the full omnidirectional scene, so users can easily obtain a whole view of the environment, even the parts that are behind them.


Figure 2: Projection methods of both the immersive and the planet view from 360 omnidirectional video

As these planets are around the user's feet, we further suggest natural feet interaction with the planets to enhance the experience.

2 Implementation

Traditionally, omnidirectional images are captured in a rectangular form, formally known as the equirectangular projection. To be viewed in virtual reality (VR), these equirectangular images are mapped onto a sphere, with the user's viewpoint situated at the center of the sphere, to provide a fully immersive view. However, as aforementioned, with only the spherical projection we could miss interesting events in VR due to our limited FOV. Alternative visualization methods have thus been explored to show more information beyond our natural FOV. In a VR context, we could increase our FOV by presenting the full 360-degree equirectangular image in the HMD [Ardouin et al. 2012], while contexts other than VR also benefit from alternative visualizations of equirectangular images [Mulloni et al. 2012; Fong 2014].

To solve the problem of limited view in VR while still adhering to immersion, we propose the use of a "planet" view that captures the full omnidirectional scene of the environment and presents it at our feet, so users can easily obtain a complete view, including areas behind them. In our system, we apply a stereographic transformation to the omnidirectional images to create a 2D planar representation of the spherical view. The resulting "planet" is a circular meta-view of the environment whose center lies at the user's feet, providing a clear spatial mapping to the spherical view (Figure 2). Users can thus, while in the immersive spherical view, glance down to locate events that may be hidden behind them, and then look up in the corresponding direction to view them again in immersion.
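As a concrete illustration, here is a minimal offline sketch of that stereographic "little planet" transformation in Python, assuming a 2:1 equirectangular input; the function name, output size, and the choice of placing the equator at half the planet radius are our own, not details of the paper's system.

```python
import numpy as np
from PIL import Image

def equirect_to_planet(equirect, out_size=1024):
    """Resample an equirectangular panorama into a circular 'planet'.

    The nadir (floor) lands at the center and the zenith toward the rim,
    so the whole scene reads as laid out around the viewer's feet.
    """
    h, w = equirect.shape[:2]
    # Centered output grid, scaled so the unit-sphere equator (rho = 2)
    # falls at half the planet radius -- a free aesthetic choice.
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    x = (xs - out_size / 2) / (out_size / 8.0)
    y = (ys - out_size / 2) / (out_size / 8.0)
    rho = np.hypot(x, y)
    lon = np.arctan2(y, x)  # longitude comes straight from the pixel angle
    # Inverse stereographic projection (projecting from the zenith onto the
    # plane tangent at the nadir): rho = 2*cot(colat/2), hence
    # lat = pi/2 - 2*atan(2/rho); rho=0 is the nadir, rho=2 the equator.
    lat = np.pi / 2 - 2 * np.arctan2(2.0, rho)
    # Sample the source image (top row = zenith, bottom row = nadir).
    src_x = (((lon / (2 * np.pi)) + 0.5) * (w - 1)).astype(int) % w
    src_y = np.clip(((0.5 - lat / np.pi) * (h - 1)).astype(int), 0, h - 1)
    return equirect[src_y, src_x]

# "pano.jpg" is a placeholder path for any equirectangular image.
pano = np.asarray(Image.open("pano.jpg").convert("RGB"))
Image.fromarray(equirect_to_planet(pano)).save("planet.png")
```

In a live system the same per-pixel lookup would presumably run every frame (e.g., in a shader), but the mapping itself is unchanged.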

An interaction method brought about by these VR planets located at our feet is embodied feet interaction. Using a simple mechanism of tracking the feet with a Kinect and visualizing the limbs in VR, we can enhance the VR experience with the planets. For example, physically turning while in immersive VR to view the scene behind the user can be problematic, as it may disorient the user in the physical environment. Instead, with feet interaction, the user can swipe the planet to rotate both the planet and the immersive spherical view, viewing different directions without physically turning. We implemented three examples of feet interaction: (1) using the feet to rotate the planet, which in turn rotates the spherical view (sketched below), (2) erasing the planet with the feet, and (3) walking between multiple planets to transition between the environments the user is immersed in. A merit of feet interaction is that it is hands-free and intuitive for certain interactions, as demonstrated by the Step World-in-Miniature, where users step on environment widgets projected on the floor to move around the environment [LaViola Jr et al. 2001]. We therefore envision that the VR experience can benefit from the planet view and feet interaction, which lighten the burden of hand interaction.
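A minimal sketch of the swipe-to-rotate interaction, assuming some tracker (here, the Kinect mentioned above) reports the foot's floor position and floor contact each frame; the class, units, and update structure are our own framing rather than the paper's implementation.

```python
import math

class PlanetRotator:
    """Turn a foot swipe across the planet into a shared yaw rotation."""

    def __init__(self, planet_center):
        self.cx, self.cy = planet_center  # planet center on the floor (m)
        self.prev_angle = None            # foot angle on the previous frame
        self.yaw = 0.0                    # yaw applied to planet AND sphere

    def update(self, foot_x, foot_y, touching):
        """Feed one tracked foot sample; returns the accumulated yaw."""
        if not touching:                  # foot lifted: the swipe ends
            self.prev_angle = None
            return self.yaw
        angle = math.atan2(foot_y - self.cy, foot_x - self.cx)
        if self.prev_angle is not None:
            delta = angle - self.prev_angle
            delta = (delta + math.pi) % (2 * math.pi) - math.pi  # unwrap seam
            self.yaw += delta             # planet follows the dragging foot
        self.prev_angle = angle
        return self.yaw
```

Applying the same yaw to both the planet texture and the immersive sphere keeps the two representations spatially consistent, which is what lets the user reorient without physically turning.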

3 Visualization

The planet view, when combined with the traditional immersivespherical view, gives three kinds of visualizations. In our system,we transition between these visualizations using feet interaction.

Immersive + Planet view: is a hybrid view that integrates the planet view with the immersive view by placing the planet at the user's feet. The user normally sees the immersive view and can look around just as in a normal omnidirectional VR experience. As the user looks down, instead of seeing the floor as in a usual VR experience, the user sees the planet view of the whole environment.

Full Immersive: is useful in situations where the full omnidirectional video should be viewed in immersion, including the floor direction, as in the case of aerial 360 videos. While in the "Immersive + Planet" view, users can switch to it through feet interaction, swiping to erase the planet on the floor, similar to how one would erase graffiti from the floor.

Planets view: shows only the planets situated across the virtual environment. While in the "Immersive + Planet" view, as users look in the direction where other planets are placed, the immersive sphere gradually blends out to hint at the presence of those planets. Then, as users walk off the planet they were standing on, their view transitions to the Planets view, where they can walk to different planets to trigger different immersions. The resulting transitions between the three views are sketched below.
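The transitions can be summarized as a small state machine. The sketch below is our reading of the description above, with hypothetical event names standing in for the detected feet gestures; the paper does not describe a transition back out of the Full Immersive view, so none is modeled.

```python
from enum import Enum, auto

class Mode(Enum):
    IMMERSIVE_PLANET = auto()  # default: immersive sphere + planet at the feet
    FULL_IMMERSIVE = auto()    # planet erased; the floor shows the video too
    PLANETS = auto()           # sphere blended out; multiple planets visible

def next_mode(mode, event):
    """Advance the visualization mode on a detected feet-interaction event."""
    if mode is Mode.IMMERSIVE_PLANET:
        if event == "swipe_erase":      # wiping the planet like floor graffiti
            return Mode.FULL_IMMERSIVE
        if event == "step_off_planet":  # walking off toward other planets
            return Mode.PLANETS
    elif mode is Mode.PLANETS and event == "step_on_planet":
        return Mode.IMMERSIVE_PLANET    # entering a planet starts its immersion
    return mode                         # all other events leave the mode as-is
```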

4 Application and User Experience

Adding a planet view brings more potential to applications and interactions with omnidirectional videos. Here we give three examples.

We could capture an omnidirectional image of the live environment that we are physically situated in and transform it into a planet view. By integrating this with a video see-through HMD and house appliances, we could enhance our interaction with those appliances. For example, instead of reaching for remotes, we can look down at the planet view and use our feet to step on the appliance we want to trigger (Figure 1b).
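As a hypothetical sketch of how such a trigger could work, the foot's offset from the planet center can be pushed back through the inverse stereographic mapping to a direction in the live panorama and matched against known appliance directions; every name, value, and threshold below is an assumption for illustration only.

```python
import math

# Hypothetical appliance directions in the live panorama: (longitude, latitude)
APPLIANCES = {"lamp": (0.8, -0.1), "fan": (-2.0, 0.2)}  # made-up values

def foot_to_direction(dx, dy, planet_radius, equator_frac=0.5):
    """Foot offset from the planet center (m) -> (lon, lat) in the panorama."""
    # Same convention as the earlier sketch: equator at equator_frac * radius.
    rho = math.hypot(dx, dy) / (planet_radius * equator_frac) * 2.0
    lon = math.atan2(dy, dx)
    lat = math.pi / 2 - 2 * math.atan2(2.0, rho)  # inverse stereographic
    return lon, lat

def stepped_appliance(dx, dy, planet_radius, tol=0.3):
    """Return the appliance the foot landed on, or None."""
    lon, lat = foot_to_direction(dx, dy, planet_radius)
    for name, (alon, alat) in APPLIANCES.items():
        # Crude angular match; a real system would handle longitude wrap.
        if math.hypot(lon - alon, lat - alat) < tol:
            return name  # e.g. toggle the light in Figure 1b
    return None
```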

A VR experience, while immersive for the user, often leaves the audience "left out", as most of the time they can only watch what the user is experiencing on a monitor. Projecting the virtual environment into the physical environment can enhance the audience experience [Saraiji et al. 2015]. With a projection of the planet view in the physical environment, the audience can perceive the kind of 360-degree environment the user is situated in. The audience can also interact with the user, for example by using their feet to swipe the physically projected planet, rotating the sphere in the user's virtual environment (Figure 1c).

The planet view gives users the perception of being situated in an environment with many portals. Users can walk to different planets in an embodied way to be immersed in different environments (Figure 1d). By providing omnidirectional videos of different locations and placing their planets according to a physical map, we could create the experience of literally walking from Seattle to London, for example. For the demonstration, we will provide planets of different locations, including recorded spots as well as omnidirectional video streamed live from a telepresence robot at the SIGGRAPH venue.

Acknowledgements

This work was supported by NTT Media Intelligence Laboratories.

References

ARDOUIN, J., LECUYER, A., MARCHAL, M., RIANT, C., AND MARCHAND, E. 2012. Flyviz: a novel display device to provide humans with 360 vision by coupling catadioptric camera with HMD. In Proceedings of the 18th ACM Symposium on Virtual Reality Software and Technology, ACM, 41–44.

FONG, C. 2014. An indoor alternative to stereographic spherical panoramas. In Proceedings of Bridges 2014: Mathematics, Music, Art, Architecture, Culture, Tessellations Publishing, 103–110.

LAVIOLA JR, J. J., FELIZ, D. A., KEEFE, D. F., AND ZELEZNIK, R. C. 2001. Hands-free multi-scale navigation in virtual environments. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, ACM, 9–15.

MULLONI, A., SEICHTER, H., DUNSER, A., BAUDISCH, P., AND SCHMALSTIEG, D. 2012. 360 panoramic overviews for location-based services. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, CHI '12, 2565–2568.

SARAIJI, M., FERNANDO, C. L., MINAMIZAWA, K., AND TACHI, S. 2015. Mutual hand representation for telexistence robots using projected virtual hands. In Proceedings of the 6th Augmented Human International Conference, ACM, 221–222.