
Page 1: Recap

• Last lecture we looked at local shading models
– Diffuse and Phong specular terms
– Flat and smooth shading

• Some things were glossed over
– Light source types and their effects
– Distant viewer assumption

• This lecture:
– Clean up the odds and ends
– Texture mapping and other mapping effects
– A little more on the next project

Page 2: Light Sources

• Two aspects of light sources are important for a local shading model:
– Where is the light coming from (the L vector)?
– How much light is coming (the I values)?

• Various light source types give different answers to the above questions:
– Point light source: Light from a specific point
– Directional: Light from a specific direction
– Spotlight: Light from a specific point, with intensity that depends on the direction
– Area light: Light from a continuum of points (later in the course)

Page 3: Point and Directional Sources

• Point light: L(x) = (plight − x) / ‖plight − x‖
– The L vector depends on where the surface point is located
– Must be normalized, which is slightly expensive
– OpenGL light at (1,1,1):

GLfloat light_position[] = { 1.0, 1.0, 1.0, 1.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

• Directional light: L(x) = Llight
– The L vector does not change over points in the world
– OpenGL light arriving from direction (1,1,1) (the light itself travels in the opposite direction):

GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
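
As an aside, a minimal C sketch of the cost difference between the two source types; the vec3 type and helper functions are hypothetical, not from the lecture:

#include <math.h>

/* A vec3 type and helpers, assumed for illustration. */
typedef struct { float x, y, z; } vec3;

vec3 v_sub(vec3 a, vec3 b) {
    vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

vec3 v_normalize(vec3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Point light: L depends on the surface point x and must be
   normalized for every point -- the "slightly expensive" part. */
vec3 L_point(vec3 p_light, vec3 x) { return v_normalize(v_sub(p_light, x)); }

/* Directional light: L is the same everywhere; normalize it once. */
vec3 L_directional(vec3 l_light) { return v_normalize(l_light); }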

Page 4: Spotlights

• Point source, but intensity depends on L:
– Requires a position: the location of the source

– Requires a direction: the center axis of the light

– Requires a cut-off: how broad the beam is

– Requires an exponent: how the light tapers off at the edges of the cone

• Intensity scaled by (L·D)^n, where D is the spotlight direction and n is the exponent

glLightfv(GL_LIGHT0, GL_POSITION, light_posn);
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light_dir);
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);
glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, 1.0);

[Figure: spotlight cone, showing the direction axis and the cut-off angle]
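
A hedged sketch of the (L·D)^n falloff with a hard cut-off, following the slide's convention (sign conventions for D vary between APIs):

#include <math.h>

/* Spotlight scale factor: (L.D)^n inside the cone, 0 outside.
   l_dot_d is the dot product of L with the spotlight axis D (both
   normalized); cutoff_deg is the cone half-angle; n is the exponent. */
float spot_factor(float l_dot_d, float cutoff_deg, float n) {
    float cos_cutoff = cosf(cutoff_deg * 3.14159265f / 180.0f);
    if (l_dot_d < cos_cutoff)
        return 0.0f;               /* outside the cone: no light */
    return powf(l_dot_d, n);       /* taper toward the cone edge */
}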

Page 5: Distant Viewer Approximation

• Specularities require the viewing direction:
– V(x) = (VRP − x) / ‖VRP − x‖

– Slightly expensive to compute

• The distant viewer approximation uses a global V
– Independent of which point is being lit

– Use the view plane normal vector

– Error depends on the nature of the scene
• Explored in the homework
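
A small sketch of the trade-off, reusing the hypothetical vec3 helpers from the point-light sketch above:

/* Exact: V is recomputed and normalized for every surface point. */
vec3 V_exact(vec3 vrp, vec3 x) { return v_normalize(v_sub(vrp, x)); }

/* Distant viewer approximation: one global V for the whole frame,
   e.g. the view plane normal, computed once. */
vec3 V_approx(vec3 view_plane_normal) { return v_normalize(view_plane_normal); }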

Page 6: Mapping Techniques

• Consider the problem of rendering a soup can
– The geometry is very simple: a cylinder
– But the color changes rapidly, with sharp edges
– With the local shading model so far, the only place to specify color is at the vertices
– To do a soup tin, we would need thousands of polygons for a simple shape
– The same goes for an orange: a simple shape but complex normal vectors

• Solution: Mapping techniques use simple geometry modified by a mapping of some type

Page 7: Texture Mapping (Watt 8.1)

• The soup tin is easily described by pasting a label on the plain cylinder

• Texture mapping associates the color of a point with the color in an image: the texture
– Soup tin: Each point on the cylinder gets the label's color

• Question: Which point of the texture do we use for a given point on the surface?

• Establish a mapping from surface points to image points
– Different mappings are common for different shapes

– We will, for now, just look at triangles (polygons)

Page 8: Basic Mapping

• The texture lives in a 2D space
– Parameterize points in the texture with 2 coordinates: (s,t)

– These are just what we would call (x,y) if we were talking about an image, but we wish to avoid confusion with the world (x,y,z)

• Define the mapping from (x,y,z) in world space to (s,t) in texture space

• With polygons:
– Specify (s,t) coordinates at vertices

– Interpolate (s,t) for other points based on given vertices

Page 9: Basic Mapping

[Figure slide: illustration of mapping texture (s,t) coordinates onto a surface]

Page 10: Interpolating Coordinates

Triangle vertices carry both screen and texture coordinates:

(x1, y1), (s1, t1)
(x2, y2), (s2, t2)
(x3, y3), (s3, t3)

For a scanline at height y, interpolate along the edges to get the texture coordinates at the left and right ends of the span, then interpolate across the span. For s (t is interpolated identically):

sR = ((y − y3) / (y1 − y3)) s1 + ((y1 − y) / (y1 − y3)) s3

sL = ((y − y3) / (y2 − y3)) s2 + ((y2 − y) / (y2 − y3)) s3

s = ((xR − x) / (xR − xL)) sL + ((x − xL) / (xR − xL)) sR
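
A hedged C sketch of the two interpolation steps above; the function names are ours, not from the lecture:

/* Linear interpolation: value at parameter a in [0,1]. */
float lerp(float v0, float v1, float a) { return v0 + a * (v1 - v0); }

/* Texture coordinate at scanline y along the edge from (ya, sa)
   to (yb, sb) -- the sR / sL step above. */
float edge_coord(float y, float ya, float sa, float yb, float sb) {
    return lerp(sa, sb, (ya - y) / (ya - yb));
}

/* Texture coordinate at pixel x across a span [xL, xR] whose ends
   carry values sL and sR -- the final step above. */
float span_coord(float x, float xL, float sL, float xR, float sR) {
    return lerp(sL, sR, (x - xL) / (xR - xL));
}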

Page 11: Basic OpenGL Texturing

• Specify texture coordinates for the polygon:
– Use glTexCoord2f(s,t) before each vertex:

• glTexCoord2f(0,0); glVertex3f(x,y,z);

• Create a texture object and fill it with texture data:
– glGenTextures(num, &indices) to get identifiers for the objects
– glBindTexture(GL_TEXTURE_2D, identifier) to bind the texture

• Following texture commands refer to the bound texture

– glTexParameteri(GL_TEXTURE_2D, …, …) to specify parameters for use when applying the texture

– glTexImage2D(GL_TEXTURE_2D, ….) to specify the texture data (the image itself)

MORE…

Page 12: Basic OpenGL Texturing (cont)

• Enable texturing: glEnable(GL_TEXTURE_2D)
• State how the texture will be used:
– glTexEnvf(…)

• Texturing is done after lighting
• You're ready to go…
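
Putting the last two slides together, a minimal fixed-function setup sketch; the function, the tex_id name, and the image arguments are hypothetical:

#include <GL/gl.h>

GLuint tex_id;   /* hypothetical texture object name */

void setup_texture(const GLubyte *pixels, int width, int height)
{
    /* Create a texture object and bind it; subsequent texture
       commands refer to the bound texture. */
    glGenTextures(1, &tex_id);
    glBindTexture(GL_TEXTURE_2D, tex_id);

    /* Parameters used when applying the texture. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* The texture data itself: RGB, one byte per channel, no border. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    /* Enable texturing and state how the texture will be used. */
    glEnable(GL_TEXTURE_2D);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}

void draw_textured_quad(void)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
    glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glEnd();
}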

Page 13: Nasty Details

• There is a large range of functions for controlling the layout of texture data:
– You must state how the data in your image is arranged

– E.g., glPixelStorei(GL_UNPACK_ALIGNMENT, 1) tells OpenGL not to skip bytes at the end of a row

– You must state how you want the texture to be put in memory: how many bits per "pixel", which channels, …

• For project 3, when you use this stuff, there will be example code, and the Red Book contains examples
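
A hedged illustration of those two statements; GL_RGBA8 and the variable names are example choices, not from the slides:

/* Rows of the incoming image are tightly packed, so do not skip
   bytes at the end of each row (the default alignment is 4). */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

/* The third argument (internal format) says how the texture is kept
   in memory: here RGBA at 8 bits per channel.  The last three
   arguments describe the data being handed in. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);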

Page 14: Controlling Different Parameters

• The "pixels" in the texture map may be interpreted as many different things:
– As colors in RGB or RGBA format

– As grayscale intensity

– As alpha values only

• The data can be applied to the polygon in many different ways:
– Replace: Replace the polygon color with the texture color

– Modulate: Multiply the polygon color with the texture color or intensity

– Composite: Combine the texture with the base color using a compositing operator

Page 15: Example: Diffuse Shading and Texture

• Say you want to have an object textured and have the texture appear to be diffusely lit

• Problem: Texture is applied after lighting, so how do you adjust the texture’s brightness?

• Solution:
– Make the polygon white and light it normally
– Use glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)

– Use GL_RGB for internal format

– Then, texture color is multiplied by surface (fragment) color, and alpha is taken from fragment
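
A short sketch of that recipe (the material values are illustrative only):

/* Light a white polygon normally; the lit fragment color then
   carries only the diffuse shading. */
GLfloat white[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, white);

/* GL_MODULATE multiplies the texture color into the lit fragment
   color, so the texture appears diffusely lit. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);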

Page 16: Textures and Aliasing

• Textures are subject to aliasing:
– A polygon point maps into a texture image, essentially sampling the texture at a point

• Standard approaches:
– Pre-filtering: Filter the texture down before applying it

– Post-filtering: Take multiple pixels from the texture and filter them before applying to the polygon fragment

Page 17: Mipmapping (Pre-filtering)

• If a textured object is far away, one screen pixel (on an object) may map to many texture pixels
– The problem is how to combine them

• A mipmap is a low-resolution version of a texture
– The texture is filtered down as a pre-processing step:

• gluBuild2DMipmaps(…)

– When the textured object is far away, use the mipmap chosen so that one image pixel maps to at most four mipmap pixels

– A full set of mipmaps requires only about one third more storage than the original texture (1 + 1/4 + 1/16 + … = 4/3)
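
A sketch of the pre-filtering step; width, height, and pixels are the hypothetical base image from the earlier setup sketch:

#include <GL/glu.h>

/* Filter the base image down through the whole mipmap chain as a
   pre-process; the arguments mirror glTexImage2D. */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);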

Page 18: Post-Filtering

• You tell OpenGL what sort of post-filtering to do
• When the image pixel is smaller than the texture pixel (magnification):
– glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, type)
– Type is GL_LINEAR or GL_NEAREST

• When the image pixel is bigger than the texture pixels (minification):
– GL_TEXTURE_MIN_FILTER to specify the "minification" filter
– Can choose to:
• Take the nearest point in the base texture, GL_NEAREST
• Linearly interpolate the nearest 4 pixels in the base texture, GL_LINEAR
• Take the nearest mipmap, then take the nearest point or interpolate within it: GL_NEAREST_MIPMAP_NEAREST or GL_LINEAR_MIPMAP_NEAREST
• Interpolate between the two nearest mipmaps, using nearest or interpolated points from each: GL_NEAREST_MIPMAP_LINEAR or GL_LINEAR_MIPMAP_LINEAR
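
For example, a common combination (our choice, not prescribed by the slides):

/* Magnification: bilinear interpolation of the 4 nearest texels. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Minification: trilinear filtering -- interpolate within each of
   the two nearest mipmap levels, then blend between the levels. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);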

Page 19: Boundaries

• You can control what happens if a point maps to a texture coordinate outside of the texture image

• Repeat: Assume the texture is tiled
– glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)

• Clamp: The texture coordinates are clamped to the valid range and then used

– glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)

• Can specify a special border color:
– glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, color), where color points to an {R,G,B,A} array
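
For example (the wrap mode is set per axis; the array form is how glTexParameterfv expects the color):

/* Tile the texture along s, clamp it along t. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

/* Border color, passed as a pointer to four floats. */
GLfloat border[] = { 0.0f, 0.0f, 0.0f, 1.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);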

Page 20: Other Texture Stuff

• Texture must be in fast memory - it is accessed for every pixel drawn

• Texture memory is typically limited, so a range of functions are available to manage it

• Specifying texture coordinates can be annoying, so there are functions to automate it

• Sometimes you want to apply multiple textures to the same point: Multitexturing is now in some hardware

Page 21: Other Texture Stuff

• There is a texture matrix: apply a matrix transformation to texture coordinates before indexing texture

• There are “image processing” operations that can be applied to the pixels coming out of the texture

• There are 1D and 3D textures
– Instead of giving 2D texture coordinates, give coordinates of the matching dimension

– Mapping works essentially the same

– 3D is used in visualization applications, such as visualizing MRI or other medical data

– 1D saves memory if the texture is inherently 1D, like stripes

Page 22: Procedural Texture Mapping

• Instead of looking up an image, pass the texture coordinates to a function that computes the texture value on the fly
– RenderMan, the Pixar rendering language, does this
– Also now becoming available in hardware

• Advantages:
– Near-infinite resolution with small storage cost
– The idea works for many other things

• Has the disadvantage of being slow
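
A minimal sketch of the idea: the texture value for (s,t) is computed on the fly rather than looked up; the checker pattern and its scale are arbitrary example choices:

#include <math.h>

/* Procedural checkerboard: the "texture" for point (s,t) is computed
   at any resolution, with no stored image at all. */
int checker(float s, float t, float squares)
{
    int i = (int)floorf(s * squares);
    int j = (int)floorf(t * squares);
    return (i + j) & 1;   /* alternate 0/1 in both directions */
}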

Page 23: Other Types of Mapping

• Environment mapping looks up incoming illumination in a map
– Simulates reflections from shiny surfaces

• Bump mapping computes an offset to the normal vector at each rendered pixel
– No need to put bumps in the geometry, but the silhouette looks wrong

• Displacement mapping adds an offset to the surface at each point
– Like putting bumps on the geometry, but simpler to model

• All are available in software renderers, such as RenderMan-compliant renderers

• All these are becoming available in hardware