CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann


Page 1: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

CS 325 Introduction to Computer Graphics

04 / 14 / 2010

Instructor: Michael Eckmann

Page 2: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Michael Eckmann - Skidmore College - CS 325 - Spring 2010

Today’s Topics
• Questions?

• Adding surface detail

– Texture mapping

– Environment mapping

– Bump mapping

• Reducing Intersection calculations in ray tracing

Page 3: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann
Page 4: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Texture mapping
• Steps

– Set up the texture to world object mapping so that we'll know which points on the object map to which points on the texture. (We spoke about this yesterday --- see the board.)

– For all scanlines in the image, and for every pixel in the scanline determine what color (from the texture map) to draw, by figuring out where the pixel maps to the object and then where that part of the object maps to the texture. (See handout 3rd to last page).

– Beware of shortcuts like the one used in Gouraud shading, which computes colors at the two endpoints and then linearly interpolates the colors between them.

• If we do this with texture mapping, the result is noticeably incorrect.
  – This is a problem because equally spaced points on an arbitrary line in 3D map to unequally spaced points on another line in 3D when the line is projected perspectively. (See the new handout, and the last page of the other handout, for a linearly interpolated texture.)
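As a structural sketch only (in Python; pixel_to_surface and surface_to_uv are hypothetical stand-ins for whatever object and texture parameterizations are chosen, not functions from the course), the per-scanline loop described above might look like this:

    def render_textured(image_width, image_height, texture, pixel_to_surface, surface_to_uv):
        # pixel_to_surface(x, y): visible point on the object for this pixel, or None
        # surface_to_uv(p): texture coordinates (u, v) in [0, 1] for that object point
        frame = [[None] * image_width for _ in range(image_height)]
        for y in range(image_height):                    # for all scanlines
            for x in range(image_width):                 # for every pixel in the scanline
                p = pixel_to_surface(x, y)
                if p is None:
                    continue                             # this pixel does not see the object
                u, v = surface_to_uv(p)
                ti = min(int(v * len(texture)), len(texture) - 1)
                tj = min(int(u * len(texture[0])), len(texture[0]) - 1)
                frame[y][x] = texture[ti][tj]            # color taken from the texture map
        return frame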

Page 5: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Texture mapping
• Linear interpolation is a problem for perspectively projected surfaces because when we do the linear interpolation in image space the interpolated value doesn't appropriately match the correct place in the texture map for the world object. Improvements include:
  – decomposing polygons into smaller ones (triangles), or
  – most accurately, performing the perspective division during interpolation
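To make the "perspective division during interpolation" idea concrete, here is a minimal sketch, assuming the two endpoints of a scanline span carry texture coordinates (u, v) and their clip-space w values (the names are illustrative, not from the course). Instead of interpolating (u, v) directly, we interpolate u/w, v/w and 1/w linearly in screen space and divide back at each pixel:

    def perspective_correct_uv(t, u0, v0, w0, u1, v1, w1):
        # t in [0, 1] is the screen-space fraction along the span
        inv_w    = (1 - t) / w0      + t / w1        # interpolate 1/w linearly
        u_over_w = (1 - t) * u0 / w0 + t * u1 / w1   # interpolate u/w linearly
        v_over_w = (1 - t) * v0 / w0 + t * v1 / w1   # interpolate v/w linearly
        return u_over_w / inv_w, v_over_w / inv_w    # divide back at this pixel

    # The Gouraud-style shortcut would instead return ((1-t)*u0 + t*u1, (1-t)*v0 + t*v1),
    # which is what produces the noticeably incorrect textures discussed above.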

Page 6: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Texture mapping
• Texture mapping can be used to substitute or scale/alter any or all of the surface's material properties
  – e.g. color, diffuse reflection coefficient, specular reflection coefficient, etc.
• We just discussed texture maps that were 1D or 2D. Texture maps can be n-dimensional.

Page 7: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Environment mapping
• Environment mapping is an intermediate surface texture mapping. Usually the intermediate surface is a sphere.
• On the inside of the hollow sphere is a projected view of the environment (from the center of the sphere). It is basically a panoramic image of the scene projected onto a sphere.

• The environment map contains colors which can take into account lights in the environment, reflections from other objects, etc.

• This projected view that is used as the environment map is generated ONCE.

• Environment mapping is described in our text as something like “a poor person's ray tracer”.

• When we generate an image USING the environment map, the result will be similar but not as accurate as the ray traced image, but it can be generated quickly (cheaper than ray tracing.)

Page 8: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Environment mapping
• To generate the image that is projected onto the inside of the sphere to be used as our environment map, we can
  – ray trace (or use some other method) onto the sphere instead of onto a plane
  – take an image of the environment with a camera with a wide-angle lens, which can be used as the projected view that will be on the inside of the sphere
  – or take different planar images if mapping onto the inside of a cube
  – or, alternatively, map any image onto the inside of the sphere if realism is not desired

• To use the environment map to determine the color of some point on a surface (see picture on the next slide),
  – we map the pixel onto the surface by following a ray through the pixel into our world.
  – Then the reflected ray (on the opposite side of the normal vector at that intersection point) is cast and (possibly hits other surfaces ...) and then
  – hits the inside of our environment mapping sphere.
  – The color at that place (or average of colors at that place) on the inside of the sphere is the color we choose to draw.
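A minimal sketch of that lookup, assuming a latitude/longitude (panoramic) image for the sphere map stored as a 2D list of colors and ignoring the "possibly hits other surfaces" recursion; env_image and the helper names are assumptions, not part of the slides:

    import math

    def reflect(d, n):
        # reflected ray direction: r = d - 2 (d . n) n, with n a unit normal
        k = 2.0 * sum(d[i] * n[i] for i in range(3))
        return [d[i] - k * n[i] for i in range(3)]

    def environment_color(env_image, view_dir, normal):
        r = reflect(view_dir, normal)
        length = math.sqrt(sum(c * c for c in r)) or 1.0
        r = [c / length for c in r]
        theta = math.acos(max(-1.0, min(1.0, r[1])))   # angle from the +y axis
        phi = math.atan2(r[2], r[0])                   # azimuth in the xz plane
        u = (phi + math.pi) / (2.0 * math.pi)          # longitude -> [0, 1]
        v = theta / math.pi                            # latitude  -> [0, 1]
        rows, cols = len(env_image), len(env_image[0])
        return env_image[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

As the next slide notes, a better version would average the colors over the small area the pixel projects to rather than sampling a single texel.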

Page 9: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann
Page 10: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Environment mapping
• From the picture on the last slide, we can see that a pixel maps to an area on the inside of the sphere (labelled on the diagram as “Pixel Projection onto Environment Map”) --- the resulting color is an average of the colors in that area.

• More generally the environment mapping is on a closed 3d surface (not necessarily a sphere) e.g. a cube is often used.

• If it is on a cube, we would generate 6 images initially that are mapped on the inside of the six faces of the cube.

• All forms of environment mapping can simulate the handling of specular and diffuse reflections.
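For the cube variant, the lookup picks one of the six face images from the dominant axis of the reflected direction. A minimal sketch, with a face naming and (u, v) convention chosen purely for illustration (it does not match any particular graphics API):

    def cube_face_and_uv(r):
        # r: reflected direction (need not be unit length)
        axis = max(range(3), key=lambda i: abs(r[i]))        # dominant axis
        face = ('+' if r[axis] >= 0 else '-') + 'xyz'[axis]  # which of the 6 face images
        others = [i for i in range(3) if i != axis]
        m = abs(r[axis])
        u = 0.5 * (r[others[0]] / m + 1.0)                   # map [-1, 1] -> [0, 1]
        v = 0.5 * (r[others[1]] / m + 1.0)
        return face, u, v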

Page 11: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Environment mapping
• Advantages of environment mapping
  – create the environment map once, then use it for different views
  – which is fast (can be used in interactive systems at frame rates)
  – simulates specular and diffuse reflections (if included in the map)
  – when the viewpoint (eye/CoP) changes
      • ray tracing requires a full re-execution of the ray tracer
      • for environment mapping, we still use the same environment map, but we can quickly generate a complex-looking image (lots of reflections, etc.) by casting rays from the new CoP

Page 12: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Environment mapping
• Disadvantages of environment mapping

– problems for concave reflective objects (because inter-reflections are position dependent)

– Also, we use the reflected ray off the surface to determine the intersection with the sphere

• more distortions occur when the object we're rendering is further from the center of the sphere

• distortions will occur because we'll be picking the wrong (but hopefully most of the time close) area on the inside of the sphere to get the color for the surface.

Page 13: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Limitation of texture and environment mapping

• One could create a texture map/environment map of the look of a bumpy surface and then use that to color an object.

• But what are the limitations of these methods?
  – what if lighting in the scene changes?
  – what if the viewpoint changes?

Page 14: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping
• Texture mapping and environment mapping cannot handle the view-dependent and lighting-dependent nature of shadows and other effects that are seen on surfaces that are bumpy like raisins, oranges, etc.

• To achieve details based on lighting calculations, bump mapping can be used.

• Bump mapping takes a simple smooth mathematical surface like a sphere, or polygon, or other curved surface as its model, but renders it so that it does not appear smooth --- before the lighting calculations are done, the surface normals are perturbed across the surface.

• The normal at a particular surface point is the key to lighting that point of the surface. Therefore, changing the surface normals causes the surface to be rendered as not smooth.

• Let's see some examples of images created with bump mapping techniques.

Page 15: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping
• The sphere or other mathematically described parametric surface that we are rendering will be defined as a function P(u,v), where u and v are the parameters and P(u,v) gives the positions on the surface.

• The actual normal (before bump mapping) of a point on the surface at P(u,v) can be calculated as the cross product of the vectors representing the slopes in the u and v directions.

• The slopes are the partial derivatives of P with respect to u and v (the parameters.) I'll draw a picture on the board for this (unfortunately there is no diagram in the book to describe this).

• Notation:
      P_u is the partial derivative of P with respect to u
      P_v is the partial derivative of P with respect to v
• the normal N is: N = P_u x P_v

Page 16: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping

      N = P_u x P_v
      n = N / |N|   (make n a unit vector)

• We use a bump function B(u,v) that generates relatively small values that will be added to our points from P(u,v).
• We compute a perturbed normal for the point by first
  – adding B(u,v) to P in the direction of n to get P'
      P'(u,v) = P(u,v) + B(u,v) n
  – then computing N' (which is the normal at P') to be N' = P'_u x P'_v
  – now we have to figure out what P'_u and P'_v are
• Note: the use here of the ' is NOT notation for derivative.

Page 17: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping
• We know that

      P'(u,v) = P(u,v) + B(u,v) n

  so, P'_u = partial derivative with respect to u of (P + B n)
      which is P_u + B_u n + B n_u
  and, P'_v = partial derivative with respect to v of (P + B n)
      which is P_v + B_v n + B n_v

• We assume that the magnitude of B is small, so we can ignore the last term in both equations to get approximations like:

      P'_u = P_u + B_u n   and   P'_v = P_v + B_v n

Page 18: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping
• We need to calculate N' = P'_u x P'_v

  so, N' = ( P_u + B_u n ) x ( P_v + B_v n )

  which after we do the math is

      (P_u x P_v) + B_v (P_u x n) + B_u (n x P_v) + B_u B_v (n x n)

  n x n = 0, so we get a good approximation for N' to be

      (P_u x P_v) + B_v (P_u x n) + B_u (n x P_v)

  then we should normalize (make unit magnitude) the N'
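A short sketch of the whole perturbation in code, following the approximation above. The helper names are mine; B_u and B_v would come from the partial derivatives (or finite differences) of whatever bump function B(u,v) is chosen:

    import math

    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]

    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return [c / m for c in v]

    def perturbed_normal(Pu, Pv, Bu, Bv):
        # Pu, Pv: partial derivatives of P(u,v); Bu, Bv: partials of the bump function B(u,v)
        N = cross(Pu, Pv)                 # unperturbed normal N = P_u x P_v
        n = normalize(N)                  # unit version of it
        a = cross(Pu, n)
        b = cross(n, Pv)
        Nprime = [N[i] + Bv * a[i] + Bu * b[i] for i in range(3)]   # the approximation above
        return normalize(Nprime)          # normalized N'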

Page 19: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Bump mapping
• Now that we have N' (the perturbed normal) at the point, what do we do?
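The slide leaves this as a question; the likely answer, following the earlier point that the normal is the key to lighting a point, is to use N' in place of the true normal in the lighting calculation. A minimal sketch of the diffuse term only (the names are illustrative):

    def diffuse_term(N_perturbed, L, k_d, light_intensity):
        # Lambertian diffuse term using the perturbed normal N' instead of the true normal;
        # N_perturbed and L (the direction to the light) are assumed to be unit vectors.
        n_dot_l = max(0.0, sum(N_perturbed[i] * L[i] for i in range(3)))
        return k_d * light_intensity * n_dot_l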

Page 20: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray Tracing / Radiosity
• Ray tracing is a type of direct illumination method in image space. Direct illumination in image space because the scene that we're rendering is made up of surfaces and lights and we compute the colors of the pixels one at a time.
  – If the view moves --- have to ray trace again
  – If the world moves --- have to ray trace again
• Ray tracing results in some realism but with a few drawbacks
  + Handles both diffuse and specular reflections as well as refractions
  – Compute intensive
  – Shadows are too crisp
• Radiosity is a type of global illumination method that works in object space.
  + If the view moves, we DO NOT have to rerun the radiosity algorithm = view independent
  – Only diffuse, no specular reflection (therefore no mirror-like surfaces)
  + Shadows are softer, more realistic
  + Color bleeds from surfaces to nearby surfaces
• Radiosity and ray tracing can be combined to produce a more realistic image than either one separately.

Page 21: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• What follows is an overview of radiosity
• We won't go into as much detail as ray tracing
• I can point you to some sources if you wish to learn more about radiosity

Page 22: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• Light reflects off of surfaces and onto other surfaces. The amount of reflected light that hits a surface is determined by
  – how much it is attenuated and
  – how much is absorbed before reflection.
• When some light is reflected off of surface S, the color of the surface S colors that reflected light to some extent. This reflected light then hits other surface(s). This effect causes color bleeding from one surface to another. Example image of radiosity “color bleeding”.
• Radiosity is a method of rendering a scene by considering the global illumination of the scene (ray tracing estimates this with the ambient term)
  – the scene is divided into patches (generally the smaller the better)
  – a patch will emit (if it's a light source) and reflect light uniformly over its surface

Page 23: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• Radiosity assumes
  – surfaces are diffuse emitters and diffuse reflectors
  – the emitting and reflecting is done uniformly over a “patch”
  – all light energy in the scene will be conserved --- either absorbed or reflected
• The radiosity of a surface is computed to be the sum of the light energy emitted (if a light source) and the (incident) light energy hitting the surface (coming from elsewhere).
• Attenuation is taken care of by
  – Form Factors (which represent the fraction of the light that is transferred from one surface to another) and
  – Reflectivity values (which represent the fraction of the light that is reflected from a surface)

Page 24: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• The Form Factor (the fraction of light that arrives at one surface from another) is computed based on
  – the areas of the 2 surfaces involved
  – the angles between the light travelling from one surface to the other and the surface normals
  – see the text if you would like details
• A form factor is defined between all directed pairs of patches.
      F_jk is the form factor from patch j to k
      it is the light energy incident on patch k divided by the total light energy leaving patch j
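The text has the full derivation; purely as an illustration, a crude center-to-center approximation of F_jk (which ignores visibility/occlusion between the patches, something a real implementation must handle) looks like this:

    import math

    def form_factor_approx(center_j, normal_j, center_k, normal_k, area_k):
        # F_jk ~= cos(theta_j) * cos(theta_k) * A_k / (pi * r^2), using patch centers only
        v = [center_k[i] - center_j[i] for i in range(3)]
        r2 = sum(c * c for c in v)
        d = [c / math.sqrt(r2) for c in v]                  # unit direction from j to k
        cos_j = max(0.0, sum(normal_j[i] * d[i] for i in range(3)))
        cos_k = max(0.0, -sum(normal_k[i] * d[i] for i in range(3)))
        return cos_j * cos_k * area_k / (math.pi * r2)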

Page 25: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• The radiosity equation for a patch k, in a scene with n patches, is:

      B_k = E_k + p_k * Sum over j = 1..n of [ B_j F_jk ]

  B_k is the radiosity of patch k
  E_k is the light emitted from patch k
  p_k is the reflectivity fraction for patch k (the fraction of incident light that is reflected in all directions)
  F_jk is the form factor from patch j to k

Page 26: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Radiosity
• For all n patches in the scene you have a radiosity equation that is based on the radiosity of all the n patches. To compute the radiosities you have to solve the n simultaneous equations.

– Techniques exist to solve a system of simultaneous equations so not to worry, but it could be expensive

• The more patches we have, the longer the radiosity calculations take, but the prettier our pictures will look (up to some point where reducing the size of our patches will not have any noticeable effect on the picture).

• Radiosity is compute intensive but the surfaces can have their radiosities precomputed if the world does not change.

• The radiosities are view independent. Therefore the radiosity of each patch can be precomputed and stored with the patch.
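A minimal sketch of one standard way to solve that system: repeatedly re-evaluate B_k = E_k + p_k * sum_j B_j F_jk until the values settle (a Jacobi-style iteration). E, p and the form-factor matrix F (with F[j][k] = F_jk) are assumed to be given; production renderers typically use Gauss-Seidel or progressive refinement instead, but the idea is the same:

    def solve_radiosity(E, p, F, iterations=100):
        # E[k]: emitted energy, p[k]: reflectivity, F[j][k]: form factor from patch j to k
        n = len(E)
        B = list(E)                                  # initial guess: emission only
        for _ in range(iterations):
            B = [E[k] + p[k] * sum(B[j] * F[j][k] for j in range(n))
                 for k in range(n)]
        return B                                     # view-independent patch radiosities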

Page 27: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray Tracing / Radiosity
• Revisit the comparison
• Ray tracing results in some realism but with a few drawbacks
  + Handles both diffuse and specular reflections as well as refractions
  – Compute intensive
  – Shadows are too crisp
  – If the viewer moves --- have to ray trace again
  – If the world moves --- have to ray trace again
• Radiosity is a type of global illumination method that works in object space.
  + If the view moves, we DO NOT have to rerun the radiosity algorithm = view independent
  – Only diffuse, no specular reflection (therefore no mirror-like surfaces)
  + Shadows are softer, more realistic
  + Color bleeds from surfaces to nearby surfaces (diffuse-diffuse reflections)

Page 28: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray Tracing / Radiosity
• Radiosity and ray tracing can be combined to produce a more realistic image than either one separately.
• The radiosity algorithm would execute as a first pass and store the output with the surfaces. Then ray tracing is done as a second pass based on the viewer position.

• If the viewer position changes, but the world stays constant, radiosity does NOT need to be rerun (therefore we can precompute the radiosity of a scene.)

• However, radiosity would need to be rerun if the world changes in any way (e.g. lights move, objects move, etc.)

Page 29: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Reducing Intersection calcs
• Let's return to ray tracing for a minute
• Our text says that ray-object intersection calculations can account for up to 95% of the processing time of the ray tracer.

• Reducing this would obviously be worth the effort.

Page 30: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Reducing Intersection calcs
• enclose objects that are near each other within a bounding volume (e.g. a sphere, a cube, etc.)

• do this for all the clusters of objects that are in your world

• then when testing for ray object intersections, first determine which bounding volumes the ray intersects with

• then only among the objects in those bounding volumes that the ray intersects with do we try to compute the ray-object intersection

– all the objects in all the bounding volumes that do not intersect the ray can be ignored

• picture on the board
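A minimal sketch of that filtering step, assuming each cluster stores a bounding sphere plus its objects, and that objects expose an intersect() method returning a hit with a distance s (all hypothetical names). The cheap sphere test rejects whole clusters before any per-object work:

    def ray_hits_sphere(P0, Rd, Pc, r):
        # boolean version of the geometric ray/sphere test covered later in these slides
        oc = [Pc[i] - P0[i] for i in range(3)]
        oc_len2 = sum(c * c for c in oc)
        if oc_len2 < r * r:
            return True                              # ray starts inside the bounding volume
        L = sum(oc[i] * Rd[i] for i in range(3))
        if L < 0:
            return False                             # volume is behind the ray
        return r * r - (oc_len2 - L * L) >= 0

    def closest_hit_with_bounds(origin, direction, clusters):
        # clusters: list of (bounding_sphere, objects) pairs
        best = None
        for bound, objects in clusters:
            if not ray_hits_sphere(origin, direction, bound.center, bound.radius):
                continue                             # skip every object in this cluster
            for obj in objects:
                hit = obj.intersect(origin, direction)
                if hit is not None and (best is None or hit.s < best.s):
                    best = hit
        return best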

Page 31: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Reducing Intersection calcs
• Space subdivision methods
  – picture the world in a large cube which is subdivided into smaller cubes

• cubes that contain surfaces can be subdivided into 8 smaller cubes

• can do this until you get down to some programmer-defined maximum number of allowable surfaces inside a cube

– can store these cubes in a binary space partition tree or octree --- so we can efficiently process the cubes

– shoot a ray and find out which small cube it first intersects with

• if the cube doesn't contain any surfaces continue down the ray to find the next cube it hits and so on

• if there are surface(s) inside the cube, determine if there's an intersection; if so, that's the one we use; if not, continue with the next cube ...

• picture on the board
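A minimal recursive sketch of this traversal, assuming an octree-style node with box_min/box_max bounds and either a list of 8 children or a list of surfaces at a leaf (a hypothetical structure, not from the text). For simplicity it visits every child cube the ray touches and keeps the nearest hit, rather than marching cube by cube along the ray as described above:

    def ray_hits_box(origin, direction, box_min, box_max):
        # standard slab test: does the ray touch this axis-aligned cube?
        t_near, t_far = 0.0, float('inf')
        for i in range(3):
            if abs(direction[i]) < 1e-12:
                if origin[i] < box_min[i] or origin[i] > box_max[i]:
                    return False
            else:
                t1 = (box_min[i] - origin[i]) / direction[i]
                t2 = (box_max[i] - origin[i]) / direction[i]
                if t1 > t2:
                    t1, t2 = t2, t1
                t_near, t_far = max(t_near, t1), min(t_far, t2)
                if t_near > t_far:
                    return False
        return True

    def closest_hit_octree(node, origin, direction):
        if not ray_hits_box(origin, direction, node.box_min, node.box_max):
            return None                              # the ray never enters this cube
        if node.children is None:                    # leaf: test only the surfaces stored here
            candidates = (s.intersect(origin, direction) for s in node.surfaces)
        else:                                        # internal node: recurse into the 8 sub-cubes
            candidates = (closest_hit_octree(c, origin, direction) for c in node.children)
        best = None
        for hit in candidates:
            if hit is not None and (best is None or hit.s < best.s):
                best = hit
        return best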

Page 32: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Reducing Intersection calcs
• a smaller maximum number of surfaces per cube implies fewer ray-object intersection calculations
• however, this also increases the number of cubes, which increases the work needed to determine the ray's path through the cubes

Page 33: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray / Sphere Intersection
• A ray is a “line” starting at some point and continuing out to infinity.

      P(s) = P_0 + R_d s

  where P_0 is the starting point of the ray, R_d is a unit directional vector, and s is the parameter which represents the distance from P_0
• Let's find the intersection of P(s) with a sphere with radius r and center point P_c.

Page 34: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray / Sphere Intersection
• Earlier we saw an algebraic solution to the ray-sphere intersection. Here (with the accompanying drawings on the board) is a geometric solution that allows you to find out earlier in the computations if the ray and sphere do not intersect (thereby reducing unneeded computations).
• 1. Determine if the ray's origin (P_0) is outside or inside of the sphere by:

      P_0P_c = P_c - P_0

  If the length squared of P_0P_c < r^2 then P_0 is inside the sphere (skip to step 4); otherwise it is outside the sphere and we continue on to step 2.
• 2. Find the length L of the ray from P_0 to the point on the ray that is closest to the center of the sphere. Notice |R_d| = 1, so we can multiply by it ...

      L = |P_0P_c| cos A = |P_0P_c| |R_d| cos A = P_0P_c • R_d

  If L < 0, then the ray does not intersect with the sphere, done. Otherwise go to step 3.

Page 35: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray / Sphere Intersection
• 3. Find the square of the distance, E^2, from the point on the ray that is closest to the center of the sphere to the intersection of the ray with the sphere.

      From the Pythagorean theorem, r^2 = E^2 + D^2, so E^2 = r^2 - D^2
      Also from the Pythagorean theorem, D^2 = |P_0P_c|^2 - L^2
      So, E^2 = r^2 - (|P_0P_c|^2 - L^2)

  If E^2 < 0, the ray does not intersect the sphere, done. Otherwise go to step 4.
• 4. Find s, the parameter of the intersection to use for the ray equation P(s) = P_0 + R_d s

      If the ray originates outside the sphere then s = L - E
      If the ray originates inside the sphere then s = L + E

• 5. Calculate the intersection using s from step 4: (x_i, y_i, z_i) = P_0 + R_d s

Page 36: CS 325 Introduction to Computer Graphics 04 / 14 / 2010 Instructor: Michael Eckmann

Ray / Sphere Intersection
• 6. Calculate the normal vector at the intersection

      If the ray's origin was outside the sphere, N = [(x_i - x_c)/r, (y_i - y_c)/r, (z_i - z_c)/r]
      If inside the sphere, N = [(x_c - x_i)/r, (y_c - y_i)/r, (z_c - z_i)/r]

(this procedure is based on Drew Kessler's notes from Lehigh, 1999)
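The six steps translate almost line-for-line into code. A minimal sketch, using plain 3-element lists for points and vectors and assuming R_d is already a unit vector:

    import math

    def ray_sphere_intersect(P0, Rd, Pc, r):
        P0Pc = [Pc[i] - P0[i] for i in range(3)]            # step 1: vector from origin to center
        len2 = sum(c * c for c in P0Pc)
        inside = len2 < r * r                               # is the ray origin inside the sphere?
        L = sum(P0Pc[i] * Rd[i] for i in range(3))          # step 2: L = P0Pc . Rd
        if not inside and L < 0:
            return None                                     # sphere is behind the ray
        E2 = r * r - (len2 - L * L)                         # step 3: E^2 = r^2 - D^2
        if E2 < 0:
            return None                                     # ray misses the sphere
        E = math.sqrt(E2)
        s = L + E if inside else L - E                      # step 4
        point = [P0[i] + Rd[i] * s for i in range(3)]       # step 5: the intersection point
        if inside:                                          # step 6: unit normal at that point
            N = [(Pc[i] - point[i]) / r for i in range(3)]
        else:
            N = [(point[i] - Pc[i]) / r for i in range(3)]
        return s, point, N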