Perception and VR
MONT 104S, Fall 2008
Lecture 21: More Graphics for VR




Creating Graphics for VR

The synthetic camera model:

1) Define the 3D object(s) in space.

2) Specify lighting, shading and material properties.

3) Specify camera properties (position, orientation, projection system, etc).

4) Imaging process:

i) Transformation: Put the object in the camera's coordinate system.

ii) Clipping: Eliminate points outside the camera's field of view.

iii) Projection: Convert from 3D to 2D coordinates.

iv) Rasterization: Projected objects are represented as pixels in the frame buffer.
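The four imaging steps above can be sketched as a toy pipeline. The function names and the simple pinhole camera (looking down the negative Z axis, no rotation) are illustrative assumptions, not the course's exact formulation:

```python
# Toy versions of the four imaging steps: transform, clip, project, rasterize.

def transform(point, cam_pos):
    # Step i: express the point in the camera's coordinate system
    # (camera at cam_pos, looking down -Z, no rotation for simplicity).
    return tuple(p - c for p, c in zip(point, cam_pos))

def clip(points, near=-1.0):
    # Step ii: keep only points in front of the camera (z < near).
    return [p for p in points if p[2] < near]

def project(point, focal=100.0):
    # Step iii: perspective projection onto an image plane at distance `focal`.
    x, y, z = point
    return (focal * x / -z, focal * y / -z)

def rasterize(point2d, width=640, height=480):
    # Step iv: map image-plane coordinates to integer pixel coordinates,
    # with the origin moved from the screen centre to the top-left corner.
    x, y = point2d
    return (int(round(x)) + width // 2, height // 2 - int(round(y)))

camera = (0, 0, 0)
world_points = [(50, 50, -200), (0, 0, 10)]   # second point is behind the camera
visible = clip([transform(p, camera) for p in world_points])
pixels = [rasterize(project(p)) for p in visible]
print(pixels)   # the behind-the-camera point is clipped away
```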


Transformations

Transformations allow us to define an object once, and then move it within the 3D graphics world.

This is useful for placing multiple copies of an object in different positions, without specifying all new point positions.

This is also useful for animations.

Three types of transformations:
Translation
Rotation
Scaling


Translation

Suppose we have an object defined by vertices in space: (x, y, z). We want to move it horizontally by some amount, dx. We alter each vertex by adding dx to the x component. We can do the same for y and z:

x' = x + dx
y' = y + dy
z' = z + dz

Example:
P1 = (50, 50, -200)
P2 = (50, 100, -200)
P3 = (100, 100, -200)
P4 = (100, 50, -200)

Shift to the right by 50 units. (Add 50 to each x component):

P1' = (100, 50, -200)
P2' = (100, 100, -200)
P3' = (150, 100, -200)
P4' = (150, 50, -200)
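The slide's translation example, checked in code (plain tuples, no graphics library):

```python
# Translate a vertex by adding an offset to each component.
def translate(p, dx=0, dy=0, dz=0):
    x, y, z = p
    return (x + dx, y + dy, z + dz)

square = [(50, 50, -200), (50, 100, -200), (100, 100, -200), (100, 50, -200)]
shifted = [translate(p, dx=50) for p in square]   # shift right by 50 units
print(shifted)
# [(100, 50, -200), (100, 100, -200), (150, 100, -200), (150, 50, -200)]
```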


Rotation about the Z axis

•To rotate an object, we must define an axis of rotation, which is the axis about which the object rotates.
•Consider rotation by an angle θ about the Z axis.

The X position changes.
The Y position changes.
The Z position does not change.

Equations:
x' = x·cosθ - y·sinθ
y' = x·sinθ + y·cosθ
z' = z

Example: Rotate the point (50, 20, -200) by 90° about the Z axis.

Rotation about the X or Y axis is similar.
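The rotation equations, applied to the example point. Rounding cleans up floating-point noise from the trig functions:

```python
import math

def rotate_z(p, degrees):
    # x' = x*cos(a) - y*sin(a); y' = x*sin(a) + y*cos(a); z unchanged.
    a = math.radians(degrees)
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

p = rotate_z((50, 20, -200), 90)
print(tuple(round(c) for c in p))   # (-20, 50, -200)
```

Rotating 90° about Z sends the +X direction to +Y, so x' = -20 and y' = 50, as the equations predict.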


Scaling

To scale in the horizontal direction, multiply the x component by a factor, sx.
To scale in the vertical direction, multiply the y component by a factor, sy.
To scale in the depth direction, multiply the z component by a factor, sz.

Equations:
x' = sx·x
y' = sy·y
z' = sz·z

Example: Scale original square by a factor of 2, horizontally.
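The scaling example in code, using the original square from the translation slide. Note that scaling multiplies coordinates relative to the origin, so the square moves as well as widens:

```python
# Scale a vertex by per-axis factors (about the origin).
def scale(p, sx=1, sy=1, sz=1):
    x, y, z = p
    return (sx * x, sy * y, sz * z)

square = [(50, 50, -200), (50, 100, -200), (100, 100, -200), (100, 50, -200)]
widened = [scale(p, sx=2) for p in square]   # scale horizontally by 2
print(widened)
# [(100, 50, -200), (100, 100, -200), (200, 100, -200), (200, 50, -200)]
```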


Hidden Surface Removal--The Painter's Algorithm

When rendering objects, we would like to show only the surfaces that are closest to us, and remove those that are behind other surfaces.

•The painter's algorithm renders each surface as it is encountered.
•If two surfaces project to the same position, the one that is drawn second "paints over" the first one. The second one will be rendered regardless of the depth of the two surfaces.
•This algorithm is more useful if you first sort the objects according to their distances from the camera. Render the most distant ones first.
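A minimal painter's algorithm over a one-dimensional "screen": sort surfaces far to near, then draw in order so nearer surfaces paint over farther ones. The surface format (pixel span, depth, colour letter) is an illustrative assumption:

```python
def painter(surfaces, width=8):
    framebuffer = ["."] * width
    # Render the most distant surfaces first (largest depth first).
    for x0, x1, depth, colour in sorted(surfaces, key=lambda s: -s[2]):
        for x in range(x0, x1):
            framebuffer[x] = colour   # later (nearer) draws overwrite earlier
    return "".join(framebuffer)

surfaces = [(0, 5, 400, "A"),   # nearer surface
            (2, 8, 600, "B")]   # farther surface, overlaps A
print(painter(surfaces))        # "AAAAABBB": A correctly paints over B
```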


Hidden Surface Removal--The Z-buffer Algorithm

•The Z-buffer algorithm uses an area of memory (the Z-buffer) to keep track of the depth of each rendered pixel.
•As each object is rendered, the depth of each pixel is compared to the current Z value stored for that position.
•If the depth of the new pixel is less than the stored depth, the new pixel replaces the old pixel (and the new depth replaces the old stored depth).
•If the depth of the new pixel is greater than the stored depth, the new pixel is discarded.

(Diagram: two overlapping surfaces at depths Z = 600 and Z = 400.)
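A per-pixel Z-buffer sketch using the depths from the slide, where a larger Z means farther from the camera. A pixel is drawn only if its depth is less than the depth already stored at that position:

```python
import math

WIDTH = 4
zbuffer = [math.inf] * WIDTH          # stored depths, initially "infinitely far"
framebuffer = ["."] * WIDTH

def plot(x, depth, colour):
    if depth < zbuffer[x]:            # new pixel is closer: keep it
        zbuffer[x] = depth
        framebuffer[x] = colour
    # otherwise the new pixel is discarded

plot(1, 600, "B")                     # far surface drawn first
plot(1, 400, "A")                     # nearer surface replaces it
plot(2, 400, "A")
plot(2, 600, "B")                     # farther surface is rejected
print("".join(framebuffer))           # ".AA." -- draw order did not matter
```

Unlike the painter's algorithm, the result is correct regardless of the order in which surfaces are rendered, which is why hardware uses a Z-buffer.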


Lighting and Shading

•Most real-time graphics use simple models of lighting and shading to generate shaded objects quickly.
•The models compute the approximate shaded color of each surface pixel based on the following properties:

•Simulated position of the light source
•Orientation of the surface relative to the position of the light source
•The color of the light
•The material properties of the surface:

•Color and amount of diffuse reflection (dull surface)
•Color and amount of specular reflection (shiny surface)
•Color and amount of ambient lighting.
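A minimal sketch of this kind of per-pixel shading: an ambient term plus Lambertian diffuse reflection (intensity proportional to the angle between the surface normal and the light direction). Specular reflection is omitted for brevity, and the vectors and colours are made-up examples:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def shade(normal, to_light, light_colour, diffuse_colour, ambient=0.1):
    n = normalize(normal)
    l = normalize(to_light)
    lambert = max(0.0, sum(a * b for a, b in zip(n, l)))   # N . L, clamped
    # Per channel: ambient contribution plus diffuse lit by the light colour.
    return tuple(min(1.0, ambient * d + lambert * lc * d)
                 for lc, d in zip(light_colour, diffuse_colour))

# Surface facing straight up, white light directly overhead: full diffuse term.
print(shade((0, 1, 0), (0, 1, 0), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
```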


More realistic lighting and shading

More realistic shading comes with a time cost. The techniques are not generally used for Virtual Reality because they're slow.

Ray Tracing:
•In ray tracing, the light rays are traced from the simulated eye position, through each pixel on the screen, back to the object.
•If necessary, the reflected light ray is projected to the light source.

Radiosity: In this technique, the light energy is computed as it reflects off multiple surfaces. This is the slowest technique.
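The eye-through-pixel idea behind ray tracing can be illustrated with the classic ray-sphere intersection test; the scene (a sphere 200 units in front of the camera) is a made-up example:

```python
import math

def hit_sphere(origin, direction, centre, radius):
    # Solve |origin + t*direction - centre|^2 = radius^2 for the nearest t > 0.
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray from the eye at the origin, through a pixel, straight down -Z,
# toward a sphere of radius 50 centred at z = -200.
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -200), 50))   # 150.0
```

A ray tracer fires one such ray per pixel (and more for reflections and shadows), which is why it is far slower than the rasterization pipeline above.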


Texture Mapping

•Rendering scenes with lots of detail can take too much time.
•Imagine drawing all the individual blades of grass in a field.
•We can give the appearance of detail by mapping an image onto a surface. This is known as texture mapping.
•For example, we can map a picture of grass onto a flat rectangular floor.
•This is a fast, easy way to generate detailed-looking scenes.
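At its core, texture mapping is a lookup: each point on the surface carries (u, v) coordinates in [0, 1] that select a texel from the image. A minimal nearest-neighbour sampler, with a tiny 2x2 stand-in for the grass picture:

```python
# A 2x2 "texture" image: rows of texels (a stand-in for a grass picture).
texture = [["dark", "light"],
           ["light", "dark"]]

def sample(tex, u, v):
    h, w = len(tex), len(tex[0])
    # Map u, v in [0, 1] to the nearest texel (clamped at the edges).
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    return tex[y][x]

print(sample(texture, 0.1, 0.1))   # "dark"
print(sample(texture, 0.9, 0.1))   # "light"
```

Real renderers do this per pixel during rasterization, usually with filtering between neighbouring texels rather than nearest-neighbour lookup.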


Collision Detection

When objects are moving, we need to detect when one object collides with another, so they can interact appropriately.

Example: Pong
•The ball bounces off the paddles or the walls and changes direction with each bounce.
•The computer must keep track of the position of the boundaries of the ball and test for when the ball touches a surface.

For complex objects, we can use a bounding box to calculate the position of the boundaries.
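The standard cheap test for two axis-aligned bounding boxes: they collide exactly when their extents overlap on every axis. The (min corner, max corner) box format and the Pong-style coordinates are illustrative assumptions:

```python
def boxes_collide(a_min, a_max, b_min, b_max):
    # Boxes overlap iff their intervals overlap on every axis.
    return all(a_lo <= b_hi and b_lo <= a_hi
               for a_lo, a_hi, b_lo, b_hi in zip(a_min, a_max, b_min, b_max))

ball = ((48, 10, 0), (52, 14, 0))      # small box around a Pong ball
paddle = ((50, 0, 0), (55, 40, 0))     # box around the right paddle
print(boxes_collide(*ball, *paddle))   # True: time to reverse the ball's dx
```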


User Interface

Interactivity is essential to Virtual Reality. The user must be able to interact with objects within the virtual world.

What are some of the tools we use to interact with virtual worlds?

Simple computer tools:

Use the mouse to select and drag.
Use menus to perform actions.
Use the keyboard to enter commands.

More complex tools:
Use a game controller to interact.
Track the user's position.
Track the position of a data glove.
Allow the user to interact with objects using the glove.