Real-time Graphics for VR: Chapter 23

Upload: geoff

Post on 05-Feb-2016


DESCRIPTION

Real-time Graphics for VR. Chapter 23. What is it about? In this part of the course we will look at how to render images given the constraints of VR: we want realistic models, e.g. scanned humans, radiosity solutions of the environment, etc. (lots of polygons/textures); and we need real-time rendering.

TRANSCRIPT

Page 1: Real-time Graphics for VR

Real-time Graphics for VR

Chapter 23

Page 2: Real-time Graphics for VR

What is it about?

• In this part of the course we will look at how to render images given the constraints of VR:

– we want realistic models
• e.g. scanned humans, radiosity solutions of the environment, etc. (lots of polygons/textures)

– we need real-time rendering
• over 25 frames per second

• often maintaining the frame rate is more important than image quality

Page 3: Real-time Graphics for VR

How can we accelerate the rendering?

• Using graphics hardware that can do the intensive operations in special chips
– as processing power increases, so do user expectations

• Fine-tuning the models
– removing overlapping parts of polygons
– removing un-needed polygons (undersides, etc.)
– replacing detail with textures

• Improving the graphics pipeline
– this is what we will concentrate on

Page 4: Real-time Graphics for VR

Making the most of the graphics hardware

• Know the strengths and limitations of your hardware
– multipass texturing
– display lists, etc.

• Don't compromise portability if the software is to be used on other platforms

• Be aware of the rapid changes in technology
– e.g. bandwidth vs rendering speed

Page 5: Real-time Graphics for VR

What’s wrong with the standard graphics pipeline?

• It processes every polygon, and therefore does not scale

• According to the statistics, the size of the average 3D model grows faster than processing power

Page 6: Real-time Graphics for VR

We can use several acceleration techniques, which can be broadly put into 3 categories:

• Visibility culling
– avoid processing anything that will not be visible in (and thus not contribute to) the final image

• Levels of detail
– generate several representations for complex objects and use the simplest that gives an adequate visual result from a given viewpoint

• Image-based rendering
– replace complex geometry with a texture
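The levels-of-detail idea can be sketched with a simple distance-based selector. This is an illustrative sketch only; the struct, function names, and switch distances below are assumptions, not part of the course material:

```cpp
#include <cstddef>
#include <vector>

// A hypothetical object with several precomputed representations,
// ordered from most detailed (index 0) to coarsest.
struct LodObject {
    std::vector<int> lodPolyCounts;     // polygon count of each representation
    std::vector<float> switchDistances; // use LOD i while distance < switchDistances[i]
};

// Pick the representation to render for a given viewer-to-object
// distance: the finest LOD whose switch distance has not yet been
// exceeded, falling back to the coarsest one for far-away objects.
int selectLod(const LodObject& obj, float distance) {
    for (std::size_t i = 0; i < obj.switchDistances.size(); ++i) {
        if (distance < obj.switchDistances[i])
            return static_cast<int>(i);
    }
    return static_cast<int>(obj.switchDistances.size()) - 1; // coarsest
}
```

In a real system the switch distances would be tuned (or the benefit estimated per frame) so that the simplification is not visually noticeable from the given viewpoint.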

Page 7: Real-time Graphics for VR

Constant frame rate

• The techniques above are not enough to ensure it

• We need system load management
– it will try to achieve an image with the best quality possible within the given frame time

– if there is too much load on the system it will resort to drastic actions (e.g. dropping objects)

– it’s an NP-complete problem
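Because the exact optimization is NP-complete, practical load managers use heuristics. A minimal greedy benefit-per-cost sketch is shown below; the struct names, scores, and millisecond budget are my own illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One renderable object: an estimated rendering cost (in ms) and a
// rough "benefit" score (how much it contributes to the image).
struct RenderItem {
    float costMs;
    float benefit;
};

// Greedy load management: render the items with the best
// benefit-per-cost ratio until the frame-time budget is used up;
// everything that does not fit is dropped this frame
// (the "drastic action" from the slide).
std::vector<int> chooseItems(std::vector<RenderItem> items, float budgetMs) {
    std::vector<int> order(items.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = static_cast<int>(i);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return items[a].benefit / items[a].costMs >
               items[b].benefit / items[b].costMs;
    });
    std::vector<int> chosen;
    float spent = 0.f;
    for (int i : order) {
        if (spent + items[i].costMs <= budgetMs) {
            chosen.push_back(i);
            spent += items[i].costMs;
        }
    }
    return chosen;
}
```

A production scheduler would also choose a level of detail per object rather than a simple keep/drop decision, but the budget-driven structure is the same.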

Page 8: Real-time Graphics for VR

The Visibility Problem

• Select the (exact?) set of polygons from the model which are visible from a given viewpoint

• The average number of polygons visible from a viewpoint is much smaller than the model size

Page 9: Real-time Graphics for VR

Visibility Culling

• Avoid rendering polygons or objects not contributing to the final image

• We have three different cases of non-visible objects:
– those outside the view volume (view-volume culling)
– those which are facing away from the user (back-face culling)
– those occluded behind other visible objects (occlusion culling)

Page 10: Real-time Graphics for VR

Visibility Culling

Page 11: Real-time Graphics for VR

Visibility methods

• Exact methods
– compute all the polygons which are at least partially visible, but only those

• Approximate methods
– compute most of the visible polygons, and possibly some of the hidden ones

• Conservative methods
– compute all visible polygons, plus maybe some hidden ones

Page 12: Real-time Graphics for VR

View volume culling

• Assuming the scene is stored in some sort of spatial subdivision

• We already saw many earlier in the course; some examples:
– hierarchical bounding volumes / spheres
– octrees / k-d trees / BSP trees
– regular grid

Page 13: Real-time Graphics for VR

View volume culling

• Compare the scene hierarchically against the view volume

• When a region is found to be outside the view volume then all objects inside it can be safely discarded

• If a region is fully inside, then render it without clipping

• What is the difference with clipping?
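The per-node test in this hierarchical walk can be sketched as a bounding-sphere versus frustum-plane check. This is one common formulation; the plane representation and the names below are assumptions for illustration:

```cpp
#include <vector>

// A plane in Hessian normal form: n·p + d = 0, with the normal n
// pointing towards the inside of the view volume.
struct Plane { float nx, ny, nz, d; };

enum class CullResult { Outside, Inside, Intersecting };

// Test a bounding sphere against the (typically six) frustum planes.
// Outside      -> discard the whole region and everything in it.
// Inside       -> render the region's contents without clipping.
// Intersecting -> recurse into the children / clip as usual.
CullResult testSphere(const std::vector<Plane>& frustum,
                      float cx, float cy, float cz, float radius) {
    bool fullyInside = true;
    for (const Plane& p : frustum) {
        float dist = p.nx * cx + p.ny * cy + p.nz * cz + p.d;
        if (dist < -radius) return CullResult::Outside; // entirely behind one plane
        if (dist < radius) fullyInside = false;         // straddles this plane
    }
    return fullyInside ? CullResult::Inside : CullResult::Intersecting;
}
```

This is also the answer to the slide's question: clipping cuts individual polygons at the volume boundary late in the pipeline, while culling rejects whole regions of the hierarchy before their polygons are ever processed.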

Page 14: Real-time Graphics for VR

View volume culling against a bounding volume hierarchy

Page 15: Real-time Graphics for VR

View volume culling against a space partitioning hierarchy

Page 16: Real-time Graphics for VR

View volume culling

• Easy to implement

• A very fast computation

• Very effective result

• Therefore it is included in almost all current rendering systems

Page 17: Real-time Graphics for VR

Back-face culling

• Simplest version is to do it per polygon
– just test the normal of each polygon against the direction of view (e.g. dot product)

• More efficient methods operate on clusters of polygons
– group polygons using the direction of their normals, make a table
– compare the view direction against the entries in this table
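The per-polygon dot-product test from the first bullet can be sketched directly (the type names are illustrative; the test assumes outward-facing normals):

```cpp
// Per-polygon back-face test: a face is culled when its outward
// normal points away from the viewer, i.e. when the dot product of
// the normal with the eye-to-polygon vector is positive.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool isBackFacing(Vec3 normal, Vec3 eye, Vec3 pointOnPolygon) {
    Vec3 view{pointOnPolygon.x - eye.x,
              pointOnPolygon.y - eye.y,
              pointOnPolygon.z - eye.z};
    return dot(normal, view) > 0.f; // facing away from the eye -> cull
}
```

For a closed opaque object this safely discards roughly half the polygons, which is why hardware pipelines perform it as a standard fixed-function step.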

Page 18: Real-time Graphics for VR

Occlusion culling

• By far the most complex (and interesting) of the three, both in terms of algorithmic complexity and in terms of implementation

• This is because it depends on the inter-relation of the objects

• Many different algorithms have been proposed, each better suited to different types of models

• What’s the difference with HSR (hidden-surface removal)?