Virtual reality 3D home applications

DESCRIPTION

Presentation about a 3D home application based on autostereoscopic technology involving lenticular lenses and 2D-plus-depth content

TRANSCRIPT

3D Home Technology

Introduction

Capturing 3D scenery

Processing the captured data

Displaying the results for 3D viewing

Conclusion

Several fields in autostereoscopic 3D home technology:

3D home projectors, 3D TVs, 3D computer screens, 3D mobile phones...

This presentation focuses on 3D TV.

More precisely, on one kind of 3D TV using lenticular lenses:

Philips WOW auto-stereoscopic 3D display products

Two main approaches:

Use multiple traditional cameras

One video associated with a per-pixel depth map

Objective of the multi-camera approach: recording the same scene from different points of view.

Advantage:
▪ provides exact views for each eye

Disadvantages:
▪ can be optimized only for one receiver configuration (size and number of views of the display)
▪ the amount of data necessary to transmit the two monoscopic color videos is quite large.

Objective of the 2D-plus-depth approach: using only a 2D picture/video associated with a map representing the depth of the scene.

Advantages:
▪ no geometric model of the object/environment is needed
▪ rendering can be scaled to the screen size.

Disadvantage:
▪ visual artefacts are created during the warping.

Depth-image-based rendering (DIBR)

Techniques to allow depth perception from a monoscopic video + per-pixel depth information.

Create one or more “virtual” views of the 3D scene.

3D Image Warping : “warp” the pixels of the image so they appear in the correct place for a new viewpoint.

Creation of 2 virtual views :

Pinhole camera model: defines a mapping from image points to rays in space (one center of projection).

Each image-space point can be placed into one-to-one correspondence with a ray that originates from the Euclidean-space origin.

This mapping function from image-space coordinates to rays can be described with a linear system:
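The linear system itself appeared as a figure on the original slide; in the notation of McMillan's SIGGRAPH ’99 course notes (listed in the sources), it is presumably the planar pinhole mapping below, where the columns a and b of P span the pixel spacing of the image plane and c points from the center of projection to the image-plane origin.

```latex
% Reconstruction in McMillan's notation (not copied from the slide).
\[
  \mathbf{d}(u,v) \;=\; \mathbf{P}\,\mathbf{x}
  \;=\;
  \begin{pmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{pmatrix}
  \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
\]
```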

Two different centers of projection (C1 and C2) linked by rays to one point X.

From the right image, x1 determines a ray d1 = P1 x1 via the pinhole camera mapping.

From the left image we have, similarly, d2 = P2 x2.

Therefore the coordinates of the point X can be expressed as X = C1 + t1 P1 x1 = C2 + t2 P2 x2,

where t1 and t2 are unknown scaling factors along the two viewing rays that make them coincident with the point X.

Image coordinates are homogeneous, so the relation holds up to the scale factor t2/t1; normalizing it away (t2/t1 = 1) yields the

McMillan & Bishop Warping Equation :

Per-pixel distance values are used to warp pixels to the proper location depending on the current eye position.

Main problem of warping: the horizontal changes in the depth map reveal areas that are occluded in the original view and become visible in some virtual views.

To resolve this problem, filters and extrapolation techniques are used to fill these occlusions.

In pink: the newly exposed areas corresponding to the new virtual camera
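As a rough illustration of this warping-plus-hole-filling step (a minimal sketch, not code from the presentation), the Python below assumes a purely horizontal camera shift, derives the disparity from an 8-bit depth map, and fills the disoccluded holes by copying the nearest already-warped pixel from the left; baseline_px is an illustrative parameter for the maximum pixel shift.

```python
import numpy as np

def render_virtual_view(color, depth, baseline_px=10.0):
    """Warp one view horizontally using its per-pixel depth (DIBR sketch).

    color : (H, W, 3) uint8 image.
    depth : (H, W) array in [0, 255], 255 = nearest to the camera.
    baseline_px : illustrative maximum disparity in pixels (assumption,
                  not a value from the presentation)."""
    h, w = depth.shape
    disparity = depth.astype(np.float32) / 255.0 * baseline_px

    virtual = np.zeros_like(color)
    zbuf = np.full((h, w), -1.0)            # keep the nearest pixel on conflicts
    filled = np.zeros((h, w), dtype=bool)

    # 3D image warping, reduced here to a horizontal per-pixel shift.
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))
            if 0 <= xv < w and depth[y, x] > zbuf[y, xv]:
                virtual[y, xv] = color[y, x]
                zbuf[y, xv] = depth[y, x]
                filled[y, xv] = True

    # Crude hole filling: extrapolate the last filled pixel from the left.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x] and filled[y, x - 1]:
                virtual[y, x] = virtual[y, x - 1]
                filled[y, x] = True
    return virtual
```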

Some examples of pre-processing for the depth map:

Smoothing of the depth map: to reduce the holes, the depth map is smoothed with filters such as average or Gaussian filters, which remove the sharp discontinuities from the depth image.

Reshaping the dynamic range of the depth map: expanding the dynamic range of the higher depth values and compressing the lower ones improves the rendering quality.
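A minimal sketch of these two pre-processing steps, assuming an 8-bit depth map where 255 means nearest; SciPy's Gaussian filter does the smoothing, and a simple power law stands in for the dynamic-range reshaping (the exponent 2.0 is an arbitrary illustrative value, not one from the presentation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_depth(depth, sigma=3.0, gamma=2.0):
    """depth : (H, W) uint8 depth map, 255 = nearest to the camera.
    sigma and gamma are illustrative values, not from the presentation."""
    d = depth.astype(np.float32) / 255.0

    # 1. Smoothing: remove sharp discontinuities so that warping opens
    #    fewer and smaller disocclusion holes.
    d = gaussian_filter(d, sigma=sigma)

    # 2. Dynamic-range reshaping: gamma > 1 expands the differences among
    #    the higher (near) depth values and compresses the lower (far) ones.
    d = d ** gamma

    return np.clip(d * 255.0, 0.0, 255.0).astype(np.uint8)
```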

Butterflies 2D plus depth

Butterflies 3D

AI broken 2D plus depth

Lenticular lenses

Basics for 3D viewing : two images, one for the left and one for the right eye.

Lenticular lens technology:

Both images are projected in different directions, so each one corresponds to one eye.

The lens array generates a parallax difference.

Series of viewing spots with transitions
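To make the principle concrete, here is a toy column-interleaving sketch for a two-view lenticular sheet (an assumption for illustration; the Philips displays interleave many views per sub-pixel under a slanted lenticular array): even pixel columns carry the left-eye image, odd columns the right-eye image, and each cylindrical lenslet sends the two sets of columns toward different viewing spots.

```python
import numpy as np

def interleave_two_views(left, right):
    """Column-interleave two equal-sized views for a two-view lenticular sheet.
    Toy sketch only; not the actual Philips sub-pixel interleaving scheme."""
    if left.shape != right.shape:
        raise ValueError("views must have the same shape")
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]    # even columns -> left eye
    out[:, 1::2] = right[:, 1::2]   # odd columns  -> right eye
    return out
```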

The technology presented here is one of the most developed for 3D home viewing and is predicted to become accessible to the public in the coming years.

The capturing and processing techniques used are flexible enough to fit the receiving display.

Any 2D scene can be converted for 3D display.

Sources :

SIGGRAPH ’99 course notes by Leonard McMillan

A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR) by Christoph Fehn

Distance Dependent Depth Filtering in 3D Warping for 3DTV by Ismaël Daribo, Christophe Tillier and Béatrice Pesquet-Popescu

Questions?
