www.Bookspar.com | Website for Students | VTU - Notes - Question Papers Page 1 of 14

UNIT-7: SHADING

Gradations or shades of color give 2-D images the appearance of being 3-D. Shading arises from the interaction between light and the surfaces in the model. Flow of discussion in this chapter:

Develops separate models of light sources and of the most common light-material interactions.

Local lighting models (as opposed to global lighting models): shading can be computed and assigned to a point on a surface independently of any other surfaces in the scene, and hence suits fast pipeline graphics architectures. These computations depend only on the material properties assigned to the surface, the local geometry of the surface, and the locations and properties of the light sources.

Applying shading to polygonal models. A recursive approximation to a sphere is developed that can be used to test the shading algorithms.

How light and material properties are specified in OpenGL, illustrated with the sphere-approximation program.

Methods for handling global lighting effects: ray tracing and radiosity.

Light and Matter

The color of a point seen on an object is determined by multiple interactions among light sources and reflective surfaces. These interactions can be viewed as a recursive process. Consider the simple scene below.

Some light from the source that reaches surface A is reflected. Some of this reflected light reaches surface B, and some of it is then reflected back to A, where some of it is again reflected back to B, and so on. This recursive reflection of light between surfaces accounts for subtle shading effects, such as the bleeding of colors between adjacent surfaces.

Mathematically this recursive process results in an integral equation, the rendering equation, which can be used to find the shading of all surfaces in a scene. This equation cannot be solved in general, even by numerical methods.

There are various approximate approaches, such as radiosity and ray tracing. Although these approximations come close to the rendering equation, they apply only to particular types of surfaces and they are slow.

Hence the Phong reflection model is used, which provides a compromise between physical correctness and efficient calculation.

Light sources: light-emitting (or self-luminous) surfaces producing rays of light. The light rays are traced to see their effect as they interact with reflecting surfaces in the scene. This approach is similar to ray tracing, but considers only single interactions between light sources and surfaces.

Two independent parts in modeling are:
1. Modeling the light sources in the scene.
2. Building a reflection model that deals with the interactions between materials and light.

Overview of the process considered in a model:

o Rays of light from a point source are traced as shown.
o The viewer sees only the light that leaves the source and reaches the eyes, perhaps through a complex path and multiple interactions with objects in the scene.
o If a ray of light enters the eye directly from the source, the color of the source is seen. If the ray hits a surface that is visible to the viewer, the observed color is based on the interaction between the source and the surface material, i.e. the color of the light reflected from the surface toward the eyes.
o In terms of computer graphics, the viewer is replaced by the projection plane, as shown below. Conceptually, the clipping window in this plane is mapped to the screen; thus, the projection plane can be considered as ruled into rectangles, each corresponding to a pixel. The color of the light source and of the surfaces determines the color of one or more pixels in the frame buffer.
o Only those rays that leave the source and reach the viewer's eye, either directly or through interactions with objects, need to be considered. In the case of computer viewing, these are the rays that reach the COP after passing through the clipping rectangle.
o Most rays leaving a source do not contribute to the image and need not be considered.


There may be both single and multiple interactions between rays and objects. It is the nature of these interactions that determines whether an object appears red or brown, light or dark, dull or shiny.

When light strikes a surface, some of it is absorbed and some of it is reflected. If the surface is opaque, reflection and absorption account for all the light striking the surface. If the surface is translucent, some of the light is transmitted through the material and emerges to interact with other objects. These interactions depend on wavelength. An object illuminated by white light appears red because it absorbs most of the incident light but reflects light in the red range of frequencies. A shiny object appears so because its surface is smooth; conversely, a dull object has a rough surface. The shading of objects also depends on the orientation of their surfaces, a factor that is characterized by the normal vector at each point.

Three types of interactions between light and materials:

1. Specular surfaces: appear shiny because most of the reflected light is scattered in a narrow range of angles close to the angle of reflection. Mirrors are perfectly specular surfaces; the light from an incoming ray may be partially absorbed, but all reflected light emerges at a single angle, obeying the rule that the angle of incidence is equal to the angle of reflection.

2. Diffuse surfaces: characterized by reflected light being scattered in all directions. Walls painted with matte or flat paint are diffuse reflectors, as are many natural materials, such as terrain viewed from an airplane or a satellite. Perfectly diffuse surfaces scatter light equally in all directions and thus appear the same to all viewers.

3. Translucent surfaces: allow some light to penetrate the surface and to emerge from another location on the object. This process of refraction characterizes glass and water. Some incident light may also be reflected at the surface.

All these surfaces are modeled in the Phong model.

Light Sources

Light can leave a surface through two fundamental processes:

o Self-emission
o Reflection

A light source is an object that emits light only through internal energy sources. However, a light source such as a light bulb can also reflect some light that is incident on it from the surrounding environment. Simple light models neglect this term; a self-emission term can be easily simulated. Consider a light source shown below.

It is an object with a surface. Each point (x, y, z) on the surface can emit light that is characterized by the direction of emission (θ, φ) and the intensity of energy emitted at each wavelength λ. Thus, a general light source can be characterized by a six-variable illumination function I(x, y, z, θ, φ, λ). The total contribution of the source on a surface can be obtained by integrating over the surface of the source, as shown below.

This integration process accounts for the emission angles that reach the surface and the distance between the source and the surface.

For a distributed light source, such as a light bulb, the evaluation of this integral (using analytic or numerical methods) is difficult. It is easier to model the distributed source with polygons, each of which is a simple source, or with an approximating set of point sources.

Four basic types of color light sources: Light sources are modeled as having three components (red, green, and blue) and can be described through a three-component intensity or luminance function

I = [Ir, Ig, Ib],

each of whose components is the intensity of the independent red, green, and blue components. Thus, the red component of a light source is used for the calculation of the red component of the image. Because color-light computations involve three similar but independent calculations, a single scalar equation can be used to represent any of the three color components.


The following four lighting types are sufficient for rendering most simple scenes.

Ambient lighting

Uniform lighting is called ambient light. In some rooms, such as certain classrooms or kitchens, the lights have been designed and positioned to provide uniform illumination throughout the room. Often, such illumination is achieved through large sources that have diffusers whose purpose is to scatter light in all directions.

Such illumination can be simulated, at least in principle, by modeling all the distributed sources, and then integrating the illumination from these sources at each point on a reflecting surface. Making such a model and rendering a scene with it would be a daunting task for a graphics system, especially one for which real-time performance is desirable.

Alternatively, ambient light can be used for uniform lighting. Thus, ambient illumination is characterized by an ambient intensity, Ia, that is identical at every point in the scene.

The ambient source has three color components: Ia = [Iar, Iag, Iab]

The scalar Ia is used to denote any one of the red, green, or blue components of Ia. Although every point in the scene receives the same illumination from Ia, each surface can reflect this light differently.

Point sources

An ideal point source emits light equally in all directions. A point source located at a point p0 can be characterized by a three-component color matrix:

I(p0) = [Ir(p0), Ig(p0), Ib(p0)]

The intensity of illumination received from a point source is proportional to the inverse square of the distance between the source and surface.

Hence, at a point p (in the figure), the intensity of light received from the point source is

i(p, p0) = (1 / |p − p0|²) I(p0)

The scalar I(p0) can be used to denote any of the components of I(p0). Point sources are easy to use, but images rendered with them bear less resemblance to physical reality.

Scenes rendered with only point sources tend to have high contrast: objects appear either bright or dark. This high-contrast effect can be mitigated by adding ambient light to a scene.

In the real world, the large size of light sources contributes to softer scenes. The distance term also contributes to the harsh renderings obtained with point sources. Although the inverse-square distance term is correct for point sources, in practice it is usually replaced by a term of the form (a + bd + cd²)⁻¹, where d is the distance between p and p0. The constants a, b, and c can be chosen to soften the lighting.

If the light source is far from the surfaces in the scene, the intensity of the light from the source is sufficiently uniform that the distance term is constant over the surfaces.

Spotlights

Spotlights are characterized by a narrow range of angles through which light is emitted. A simple spotlight can be constructed from a point source by limiting the angles at which light from the source can be seen. A cone is used whose apex is at ps, which points in the direction ls, and whose width is determined by an angle θ, as shown in the figure. If θ = 180°, the spotlight becomes a point source. More realistic spotlights are characterized by the distribution of light within the cone, usually with most of the light concentrated in the center of the cone.

Thus, the intensity is a function of the angle φ between the direction of the source and a vector s to a point on the surface (as long as this angle is less than θ; see figure). This function is usually defined by cos^e φ, where the exponent e (figure) determines how rapidly the light intensity drops off. Cosines are convenient functions for lighting


calculations. If both s and l are unit-length vectors, the cosine can be computed with the dot product cos φ = s · l, a calculation that requires only three multiplications and two additions.

Distant light

Most shading calculations require the direction from the point on the surface to the light source. This vector must be computed repeatedly to calculate the intensity at each point on the surface, and it is a significant part of the shading calculation.

However, if the light source is far from the surface, the vector does not change from point to point on the surface, just as the light from the sun strikes all objects that are in close proximity to one another at the same angle. Then a point source of light can be replaced with a source that illuminates objects with parallel rays of light: a parallel source. In practice, the calculations for distant light sources are similar to the calculations for parallel projections; they replace the location of the light source with the direction of the light source.

Hence, in homogeneous coordinates, a point light source at p0 is represented internally as a four-dimensional column matrix:

p0 = [x, y, z, 1]T

In contrast, a distant light source is described by a direction vector whose representation in homogeneous coordinates is the matrix

p0 = [x, y, z, 0]T

The graphics system can carry out rendering calculations more efficiently for distant light sources than for near ones. Of course, a scene rendered with distant light sources looks different from a scene rendered with near sources. OpenGL allows both types of sources.

The Phong Reflection Model

It uses the four vectors shown in the figure to calculate a color for an arbitrary point p on a surface. If the surface is curved, all four vectors can change from point to point.

Vector n is the normal at p. Vector v is in the direction from p to the viewer or COP. Vector l is in the direction of a line from p to an arbitrary point on the source for a distributed light source, or to the point light source. Vector r is in the direction that a perfectly reflected ray from l would take; r is determined by n and l.

The Phong model supports three types of material-light interactions:

ambient, diffuse, and specular.

Consider a set of point sources, and let each source have separate ambient, diffuse, and specular components for each of the three primary colors. Thus at any point p on the surface, the 3 × 3 illumination matrix for the i-th light source is

Li = [ Lira  Liga  Liba
       Lird  Ligd  Libd
       Lirs  Ligs  Libs ]

The first row contains the ambient intensities for the red, green, and blue terms from source i. The second row contains the diffuse terms; the third contains the specular terms.

The amount of the incident light that is reflected at the point of interest can be computed. Thus, for each point, the computed matrix of reflection terms is

Ri = [ Rira  Riga  Riba
       Rird  Rigd  Ribd
       Rirs  Rigs  Ribs ]

The value of Ri depends on the material properties, the orientation of the surface, the direction of the light source, and the distance between the light source and the viewer.

Then the contribution of each color source can be computed by adding the ambient, diffuse, and specular components. E.g., the red intensity seen at p from source i is

Iir = Rira Lira + Rird Lird + Rirs Lirs = Iira + Iird + Iirs

The total intensity is obtained by adding the contributions of all sources and, possibly, a global ambient term. Thus, the red term is

Ir = Σi (Iira + Iird + Iirs) + Iar


where Iar is the red component of the global ambient light.

Notation can be simplified by noting that the necessary computations are the same for each source and for each primary color; they differ only in whether the ambient, diffuse, or specular term is considered. Hence, the subscripts i, r, g, and b can be eliminated, and we write

I = Ia + Id + Is = La Ra + Ld Rd + Ls Rs

with the understanding that the computation is done for each of the primaries and each source; the global ambient term can be added at the end.

Ambient Reflection

The intensity of ambient light Ia is the same at every point on the surface. Some of this light is absorbed, and some is reflected. The amount reflected is given by the ambient reflection coefficient, Ra = ka. Because only a positive fraction of the light is reflected, 0 ≤ ka ≤ 1, and thus Ia = ka La. Here, La can be any of the individual light sources, or it can be a global ambient term. A surface has, of course, three ambient coefficients (kar, kag, and kab) and they can differ. Hence, for example, a sphere appears yellow under white ambient light if its blue ambient coefficient is small and its red and green coefficients are large.

Diffuse Reflection

A perfectly diffuse reflector scatters the light that it reflects equally in all directions. Hence, such a surface appears the same to all viewers. However, the amount of light reflected depends both on the material, because some of the incoming light is absorbed, and on the position of the light source relative to the surface. Diffuse reflections are characterized by rough surfaces. A magnified cross section of a diffuse surface is shown below.

Rays of light that hit the surface at only slightly different angles are reflected back at markedly different angles. Perfectly diffuse surfaces are so rough that there is no preferred angle of reflection. Such surfaces, sometimes called Lambertian surfaces, can be modeled mathematically with Lambert's law.

Consider a diffuse planar surface shown below, illuminated by the sun (a point source). The surface is brightest at noon and dimmest at dawn and dusk because, according to Lambert's law, only the vertical component of the incoming light is seen.

One way to understand this law is to consider a small parallel light source striking a plane, as shown. As the source is lowered in the (artificial) sky, the same amount of light is spread over a larger area (d/cos θ), and the surface appears dimmer (Rd is reduced by cos θ).

Returning to the initial point source, diffuse reflections can be characterized mathematically. Lambert's law states that

Rd ∝ cos θ

where θ is the angle between the normal n at the point of intersection and the direction l of the light source. If both l and n are unit-length vectors, then cos θ = l · n. Adding a reflection coefficient kd that represents the fraction of incoming diffuse light that is reflected gives the diffuse reflection term

Id = kd (l · n) Ld

If a distance term is to be incorporated, to account for attenuation as the light travels a distance d from the source to the surface, the quadratic attenuation term can be used:

Id = [kd / (a + bd + cd²)] (l · n) Ld

If the facet (surface point) is aimed away from the light (θ > 90°), this dot product, and hence Id, is negative. The model then evaluates Id to 0 by replacing (l · n) with max[(l · n), 0].

For θ near 0°, brightness varies only slightly with angle, because the cosine changes slowly there. As θ approaches 90°, however, the brightness falls rapidly to 0.


Specular Reflection

Images with only ambient and diffuse reflections appear shaded and three-dimensional, but all the surfaces look dull, somewhat like chalk, because the highlights reflected from shiny objects are not considered. These highlights usually show a color different from the color of the reflected ambient and diffuse light. E.g., a red ball viewed under white light has a white highlight that is the reflection of some of the light from the source in the direction of the viewer.

Whereas a diffuse surface is rough, a specular surface is smooth. The smoother the surface is, the more it resembles a mirror; the figure shows that as the surface gets smoother, the reflected light is concentrated in a smaller range of angles, centered about the angle of a perfect reflector (a mirror, or perfectly specular, surface).

Modeling specular surfaces realistically can be complex, because the pattern by which the light is scattered is not symmetric, it depends on the wavelength of the incident light, and it changes with the reflection angle.

Phong proposed an approximate model that can be computed with only a slight increase over the work done for diffuse surfaces. The model adds a term for specular reflection. Hence, the surface can be considered as being rough for the diffuse term and smooth for the specular term. The amount of light that the viewer sees depends on the angle ϕ between r, the direction of a perfect reflector, and v, the direction of the viewer.

The Phong model uses the equation

Is = ks Ls cos^α φ

The coefficient ks (0 ≤ ks ≤ 1) is the fraction of the incoming specular light that is reflected. The exponent α is a shininess coefficient. The figure below shows how, as α is increased, the reflected light is concentrated in a narrower region centered on the angle of a perfect reflector.

In the limit, as α goes to infinity, we get a mirror surface; values in the range 100 to 500 correspond to most metallic surfaces, and smaller values (< 100) correspond to materials that show broad highlights. The computational advantage of the Phong model is that, if r and v are normalized to unit length, the dot product can again be used, and the specular term becomes

Is = ks Ls (r · v)^α

If the facet (surface point) is aimed away from the viewer (φ > 90°), this dot product, and hence Is, is negative. The model then evaluates Is to 0 with the modified computation Is = ks Ls max[(r · v)^α, 0].

By adding a distance term, the full Phong model is written:

I = [1 / (a + bd + cd²)] (kd Ld max[(l · n), 0] + ks Ls max[(r · v)^α, 0]) + ka La

Use of the Halfway Vector (Blinn-Phong / Modified Phong Model)

If the Phong model with specular reflections is used in rendering, the dot product r · v must be recalculated at every point on the surface. An approximation can be obtained by using the unit vector halfway between the viewer vector and the light-source vector:

h = (l + v) / |l + v|

The figure below shows all five vectors. Here ψ, the half-angle, is the angle between n and h. When v lies in the same plane as l, n, and r, it can be shown that 2ψ = φ. If r · v is replaced with n · h, the calculation of r can be avoided.

Computation of Vectors

Most of the calculations for rendering a scene involve the determination of the required vectors and dot products. For each special case, simplifications are possible. E.g., if the surface is a flat polygon, the normal is the same at all points on the surface; if the light source is far from the surface, the light direction is the same at all points.

Normal Vectors

For smooth surfaces, the normal vector to the surface exists at every point and gives the local orientation of the surface. Its calculation depends on how the surface is represented mathematically.


Computation of the normal for a plane: A plane can be described by the equation ax + by + cz + d = 0. In terms of the normal to the plane, n, and a point p0 on the plane, the same equation is n · (p − p0) = 0, where p is any point (x, y, z) on the plane.

Comparing the two forms, the vector n in homogeneous coordinates is given by

n = [a, b, c, 0]T

Alternatively, assume three non-collinear points p0, p1, p2 in this plane (they are sufficient to determine it uniquely). The vectors p2 − p0 and p1 − p0 are parallel to the plane, and their cross product can be used to find the normal:

n = (p2 − p0) × (p1 − p0)

Computation of the normal for the unit sphere centered at the origin: The usual equation for this sphere is the implicit equation

f(x, y, z) = x² + y² + z² − 1 = 0

or, in vector form, f(p) = p · p − 1 = 0. The normal is given by the gradient vector, which is represented by the column matrix

n = [∂f/∂x, ∂f/∂y, ∂f/∂z]T = [2x, 2y, 2z]T = 2p

The sphere could also be represented in parametric form, in which the x, y, and z values of a point on the sphere are represented independently in terms of two parameters u and v:

x = x(u, v), y = y(u, v), z = z(u, v)

For a particular surface, there may be multiple parametric representations. One parametric representation for the sphere is

x(u, v) = cos u sin v, y(u, v) = cos u cos v, z(u, v) = sin u

By varying u and v in the range −π/2 < u < π/2, −π < v < π, all the points on the sphere can be obtained. The normal can then be obtained as follows. Let the unit sphere be f = x² + y² + z² − 1 = 0. Its normal vector is ∇f = 2xi + 2yj + 2zk, so the unit normal vector is

n = xi + yj + zk, i.e. n = p

In OpenGL, a normal can be associated with a vertex through functions such as

glNormal3f(nx, ny, nz);
glNormal3fv(pointer_to_normal);

Normals are modal variables: if a normal is defined before a sequence of vertices through glVertex calls, it is associated with all those vertices and is used for the lighting calculations at all of them. The problem is that the programmer is required to determine these normals.

Angle of Reflection

The calculated normal at a point and the direction of the light source can be used to compute the direction of a perfect reflection. An ideal mirror is characterized by the following statement: the angle of incidence is equal to the angle of reflection. These angles are as pictured below.

The angle of incidence is the angle between the normal and the light source (assumed to be a point source); the angle of reflection is the angle between the normal and the direction in which the light is reflected. In two dimensions, there is only a single angle satisfying the angle condition. In three dimensions, there are infinitely many angles satisfying it. Hence the following condition must also be added: at a point p on the surface, the incoming light ray, the reflected light ray, and the normal must all lie in the same plane. These two conditions are sufficient to determine r from n and l.

The direction of r is more important than its magnitude. Assuming both l and n have been normalized so that |l| = |n| = 1 simplifies the rendering calculations; it is also required that |r| = 1. If θi = θr, then cos θi = cos θr. Using the dot product, the angle condition is

cos θi = l · n = cos θr = n · r

Since l, n, and r are coplanar unit vectors making equal angles with n, the sum l + r must be parallel to n, say l + r = αn. Taking the dot product with n:

(l + r) · n = 2(l · n) = α(n · n) = α, since n · n = 1

∴ l + r = 2(l · n)n
∴ r = 2(l · n)n − l

Polygonal Shading

Computing the surface color using an illumination model such as the Phong formula at every point of interest can be very expensive, and is even unnecessary when the image is used only to preview the scene. To circumvent this problem, the formula is applied at full scale only at selected surface points. Then techniques such as color


interpolation and surface-normal interpolation, as explained below, can be used to shade the other surface points. There are three ways to shade the polygons; the three vectors l, n, and v can vary from point to point on a surface.

Flat/Constant shading

For a flat polygon, n is constant from point to point on the surface. For a distant viewer, v is constant over the polygon. Finally, if the light source is distant, l is constant. Here, distant can be interpreted in two ways:

o The source is at infinity. Then the shading equations and their implementation must be suitably adjusted; e.g., the location of the source must be replaced by the direction of the source.
o The source is distant in the practical sense: the size of the polygon is small relative to the distance of the polygon from the source or viewer.

If the three vectors are constant, then the shading calculation needs to be carried out only once for each

polygon, and each point on the polygon is assigned the same shade. This technique is known as flat shading. In OpenGL, flat shading is specified through

glShadeModel(GL_FLAT);

If flat shading is in effect, OpenGL uses the normal associated with the first vertex of a single polygon for the shading calculation. For primitives such as a triangle strip, OpenGL uses the normal of the third vertex for the first triangle, the normal of the fourth for the second, and so on. Similar rules hold for other primitives, such as quadrilateral strips.

Consider a mesh constructed from polygons. Flat shading will show differences in shading between the polygons in the mesh:

o If the light sources and viewer are near the polygons, the vectors l and v will be different for each polygon. If the polygonal mesh has been designed to model a smooth surface, flat shading will almost always be disappointing, because even small differences in shading between adjacent polygons are visible.
o The human visual system has a remarkable sensitivity to small differences in light intensity, due to a property known as lateral inhibition. Hence a sequence of increasing intensities will appear to have overshooting brightness on one side of each step and undershooting on the other.
o Large shading differences at the edges of polygons therefore produce visible stripes along those edges. These stripes are called Mach bands. They can be avoided by employing smoother shading techniques that do not produce such large differences.

Interpolative / Gouraud shading

OpenGL interpolates colors assigned to vertices across a polygon (e.g., in the rotating-cube program). OpenGL also interpolates colors for other primitives, such as lines, if the shading model is smooth, via

glShadeModel(GL_SMOOTH);

If both smooth shading and lighting are enabled, and a normal is assigned to each vertex of the polygon to be shaded, then the lighting calculation is made at each vertex, determining a vertex color, using the material properties and the vectors v and l computed for that vertex. Note that if the light source is distant, and either the viewer is distant or there are no specular reflections, then interpolative shading shades a polygon in a constant color.

Consider a mesh of polygons. Multiple polygons meet at each interior vertex of the mesh, and each polygon has its own normal, so the normal is discontinuous at the vertex. Hence, the idea of a single normal existing at a vertex complicates the mathematics.

Gouraud realized that a vertex normal n could nevertheless be defined in a way that achieves smoother shading through interpolation. Consider an interior vertex where four polygons meet (see figure); each has its own normal.

In Gouraud shading, the normal at a vertex is defined to be the normalized average of the normals of the polygons that share the vertex.

E.g. The vertex normal is given by n = (n1 + n2 + n3 + n4) / |n1 + n2 + n3 + n4|

In OpenGL, Gouraud shading (intensity or color-interpolation shading) is achieved simply by setting the vertex normals correctly. To find the normals to be averaged, a data structure can be used to represent the mesh; traversing this data structure generates the vertices with their averaged normals. Such a data structure should contain, minimally, polygons, vertices, normals, and material properties. The key information that must be represented in the data structure is which polygons meet at each vertex.

Phong Shading (normal-vector-interpolation shading, or per-fragment shading)

Even the smoothness introduced by Gouraud shading may not prevent the appearance of Mach bands. Phong proposed that, instead of interpolating vertex intensities as in Gouraud shading, the normals themselves be interpolated across each polygon. Consider a polygon that shares edges and vertices with other polygons in the mesh, as shown below.

Vertex normals can be computed by interpolating over the normals of the polygons that share the vertex.

Then bilinear interpolation can be used to interpolate the normals over the polygon. Consider the figure below.

The interpolated normals at vertices A and B can be used to interpolate normals along the edge between them: n(α) = (1 - α)nA + α nB.

Similar interpolation can be done on all the edges. The normal at any interior point can be obtained from points on the edges by n(α, β) = (1 - β)nC + β nD, where nC and nD are interpolated normals on the edges. Once the normal at each point is obtained, an independent shading calculation can be done at that point. Usually, this process is combined with the scan conversion of the polygon, so that the line between C and D projects to a scan line in the frame buffer. Phong shading produces renderings smoother than those of Gouraud shading, but at a significantly greater computational cost. There are various hardware implementations of Gouraud shading, and they require only a slight increase in rendering time; neither is true for Phong shading. Consequently, Phong shading is almost always done offline.

Approximation of a Sphere by Recursive Subdivision

To illustrate the interactions between shading parameters and polygonal approximations to curved surfaces, a sphere is used as the curved surface: shading calculations are illustrated on a sphere approximated by polygons.

Recursive subdivision, a powerful technique for generating approximations to curves and surfaces to any desired level of accuracy, is introduced here.

Step 1: A tetrahedron is used (although any regular polyhedron could be used) whose facets can be divided into triangles. The regular tetrahedron is composed of four equilateral triangles, determined by four vertices, say (0, 0, 1), (0, 2√2/3, -1/3), (-√6/3, -√2/3, -1/3), and (√6/3, -√2/3, -1/3). All four vertices lie on the unit sphere centered at the origin.

E.g. 1: To get a first approximation by drawing a wireframe for the tetrahedron, the four vertices are defined globally using the point type:

point3 v[4] = {{0.0, 0.0, 1.0}, {0.0, 0.942809, -0.333333},
               {-0.816497, -0.471405, -0.333333}, {0.816497, -0.471405, -0.333333}};

Triangles can be drawn via the function

void triangle(point3 a, point3 b, point3 c)
{
    glBegin(GL_LINE_LOOP);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
    glEnd();
}

The tetrahedron can be drawn by

void tetrahedron()
{
    triangle(v[0], v[1], v[2]);
    triangle(v[3], v[2], v[1]);
    triangle(v[0], v[3], v[1]);
    triangle(v[0], v[2], v[3]);
}

The order of vertices obeys the right-hand rule, so the code can be converted to draw shaded polygons with little difficulty. The display with this code is a mere tetrahedron, with no approximation to a sphere.

Step 2: A closer approximation to the sphere can be obtained by subdividing each facet of the tetrahedron into smaller triangles. Subdividing into triangles ensures that all the new facets will be flat: the bisectors of the sides of each triangle are connected, forming four equilateral triangles. After a facet is subdivided, the four new triangles still lie in the plane of the original triangle. The new vertices created by bisection must therefore be moved to the unit sphere by normalizing each bisected vertex, using a simple normalization function such as

void normal(point3 p)
{
    double d = 0.0;
    int i;
    for (i = 0; i < 3; i++) d += p[i] * p[i];
    d = sqrt(d);
    if (d > 0.0)
        for (i = 0; i < 3; i++) p[i] /= d;
}

Step 3: A single triangle, defined by the vertices numbered a, b, and c, can be subdivided by the following code:

point3 v1, v2, v3;
int j;
for (j = 0; j < 3; j++) v1[j] = v[a][j] + v[b][j];
normal(v1);
for (j = 0; j < 3; j++) v2[j] = v[a][j] + v[c][j];
normal(v2);
for (j = 0; j < 3; j++) v3[j] = v[c][j] + v[b][j];
normal(v3);
triangle(v[a], v2, v1);
triangle(v[c], v3, v2);
triangle(v[b], v1, v3);
triangle(v1, v2, v3);

Step 4: This code can be used in the tetrahedron routine to generate 16 triangles rather than 4, but there must be a provision to repeat the subdivision process n times to generate successively closer approximations to the sphere. By calling the subdivision routine recursively, the number of subdivisions can be controlled. Hence, the tetrahedron routine is made to depend on the depth of recursion by adding an argument n:

void tetrahedron(int n)
{
    divide_triangle(v[0], v[1], v[2], n);
    divide_triangle(v[3], v[2], v[1], n);
    divide_triangle(v[0], v[3], v[1], n);
    divide_triangle(v[0], v[2], v[3], n);
}

Step 5: The divide_triangle function calls itself to subdivide further if n is greater than zero, but generates triangles once n has been reduced to zero. Here is the code:

void divide_triangle(point3 a, point3 b, point3 c, int n)
{
    point3 v1, v2, v3;
    int j;
    if (n > 0)
    {
        for (j = 0; j < 3; j++) v1[j] = a[j] + b[j];
        normal(v1);
        for (j = 0; j < 3; j++) v2[j] = a[j] + c[j];
        normal(v2);
        for (j = 0; j < 3; j++) v3[j] = c[j] + b[j];
        normal(v3);
        divide_triangle(a, v2, v1, n - 1);
        divide_triangle(c, v3, v2, n - 1);
        divide_triangle(b, v1, v3, n - 1);
        divide_triangle(v1, v2, v3, n - 1);
    }
    else triangle(a, b, c);
}

The figure above shows an approximation to the sphere drawn with this code. Lighting and shading can be added to the sphere approximation, as shown after the following section.
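Each subdivision level replaces every triangle with four, so n levels of recursion turn the tetrahedron's 4 faces into 4 · 4^n triangles. A small illustrative helper (not part of the program above) that computes this count:

```c
#include <assert.h>

/* Illustrative: each level of recursion replaces every triangle with
   four, so n levels turn the tetrahedron's 4 faces into 4 * 4^n. */
long triangle_count(int n)
{
    long count = 4;
    while (n-- > 0) count *= 4;
    return count;
}
```

At n = 4 the approximation already uses 1024 triangles, which is why the subdivision depth is usually kept small.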

Light Sources in OpenGL

OpenGL supports the four types of light sources and allows at least eight light sources in a program. Each must be individually specified and enabled. Although many parameters must be specified, they are exactly the parameters required by the Phong model. The OpenGL functions

glLightfv(source, parameter, pointer_to_array);
glLightf(source, parameter, value);

set the required vector and scalar parameters, respectively. There are four vector parameters that can be set: the position (or direction) of the light source, and the amounts of ambient, diffuse, and specular light associated with the source.

E.g. To specify the first source, GL_LIGHT0, and to locate it at the point (1.0, 2.0, 3.0), its position can be stored as a point in homogeneous coordinates:

GLfloat light0_pos[] = {1.0, 2.0, 3.0, 1.0};

With the fourth component set to zero, the point source becomes a distant source with direction vector

GLfloat light0_dir[] = {1.0, 2.0, 3.0, 0.0};

To add a white specular component and red ambient and diffuse components to the single light source:

GLfloat diffuse0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat ambient0[] = {1.0, 0.0, 0.0, 1.0};
GLfloat specular0[] = {1.0, 1.0, 1.0, 1.0};

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient0);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse0);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular0);

Both lighting and the particular source must be enabled. A global ambient term that is independent of any of the sources can also be added. E.g. To add a small amount of white light:

GLfloat global_ambient[] = {0.1, 0.1, 0.1, 1.0};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, global_ambient);

The distance terms are based on the distance-attenuation model

f(d) = 1 / (a + b d + c d^2),

which contains constant, linear, and quadratic terms. These terms are set individually via glLightf, e.g.

glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, a);

A positional source can be converted to a spotlight by choosing the spotlight direction (GL_SPOT_DIRECTION), the exponent (GL_SPOT_EXPONENT), and the cutoff angle (GL_SPOT_CUTOFF). All three are specified through glLightf and glLightfv.
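The attenuation term itself is easy to sketch. This small function is an illustration, not an OpenGL call; its constants play the roles of GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION:

```c
#include <assert.h>
#include <math.h>

/* Illustrative: the distance-attenuation factor
   f(d) = 1 / (a + b*d + c*d*d) that scales a source's contribution. */
double attenuation(double d, double a, double b, double c)
{
    return 1.0 / (a + b * d + c * d * d);
}
```

With only the constant term set to 1 (the OpenGL default), the factor is 1 at every distance, i.e. no attenuation.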

GL_LIGHT_MODEL_LOCAL_VIEWER and GL_LIGHT_MODEL_TWO_SIDE: Lighting calculations can be time-consuming. If the viewer is assumed to be an infinite distance from the scene, then the calculation of reflections is easier, because the direction to the viewer from any point in the scene is unchanged. The default in OpenGL is to make this approximation, because its effect on many scenes is minimal. To make the full lighting calculation using the true position of the viewer, the model can be changed using:

glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);

A surface has both a front face and a back face. For polygons, these faces are determined by the order in which the vertices are specified, using the right-hand rule. For most objects, only the front faces are seen, so the shading of back-facing surfaces is neglected by default. To ensure that a properly placed viewer may see a back face, both the front and back faces must be shaded correctly. The following function call serves the purpose:

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Light sources are objects, just like polygons and points. Hence, light sources are affected by the OpenGL model-view transformation. They can be defined at the desired position, or defined in a convenient position and then moved to the desired position by the model-view transformation. The basic rule governing object placement is that vertices are converted to eye coordinates by the model-view transformation in effect at the time the vertices are defined. Thus, by careful placement of the light-source specifications relative to the definitions of other geometric objects, it is possible to create light sources that remain stationary while the objects move, light sources that move while the objects remain stationary, and light sources that move with the objects.

Specification of Materials in OpenGL

Material properties in OpenGL match up directly with the supported light sources and with the Phong reflection model. Different material properties can also be specified for the front and back faces of a surface. All the reflection parameters are specified through the functions

glMaterialfv(face, type, pointer_to_array);
glMaterialf(face, type, value);

E.g. To define ambient, diffuse, and specular reflectivity coefficients (ka, kd, ks) for each primary color:

GLfloat ambient[] = {0.2, 0.2, 0.2, 1.0};
GLfloat diffuse[] = {1.0, 0.8, 0.0, 1.0};
GLfloat specular[] = {1.0, 1.0, 1.0, 1.0};

Here, a small amount of white ambient reflectivity, yellow diffuse properties, and white specular reflections are defined.

The material properties for the front and back faces are set by the calls

glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular);

Note that if the ambient and diffuse coefficients are the same (as is often the case), both can be specified at once by using GL_AMBIENT_AND_DIFFUSE for the type parameter. To specify different front- and back-face properties, GL_FRONT and GL_BACK can be used.

The shininess of a surface (the exponent in the specular-reflection term) is specified by glMaterialf, e.g.

glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, 100.0);

Material properties are modal: their values remain in effect until changed, and, when changed, they affect only surfaces defined after the change. Surfaces with an emissive component, characterizing self-luminous sources, can also be defined in OpenGL. This is useful if a light source is to appear in the image. The emissive term is unaffected by any of the light sources, and it does not affect any other surfaces. It adds a fixed color to the surface and is specified in a manner similar to other material properties. E.g.

GLfloat emission[] = {0.0, 0.3, 0.3, 1.0};
glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emission);

defines a small amount of blue-green (cyan) emission.

Shading of the Sphere Model

To shade the approximate spheres with OpenGL's shading model, normals must be assigned.

E.g. 2: To flat shade each triangle, three vertices are used to determine a normal, which is then assigned to the first vertex: the cross product is computed and the result is normalized. A cross-product function is:

void cross(point3 a, point3 b, point3 c, point3 d)
{
    d[0] = (b[1]-a[1])*(c[2]-a[2]) - (b[2]-a[2])*(c[1]-a[1]);
    d[1] = (b[2]-a[2])*(c[0]-a[0]) - (b[0]-a[0])*(c[2]-a[2]);
    d[2] = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]);
    normal(d);
}

Assuming that light sources have been defined and enabled, the triangle routine can be changed to produce shaded spheres:

void triangle(point3 a, point3 b, point3 c)
{
    point3 n;
    cross(a, b, c, n);
    glBegin(GL_POLYGON);
    glNormal3fv(n);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
    glEnd();
}

The result of flat shading the spheres is shown in the figure above.

Note: As the number of subdivisions is increased, the interior of the spheres appears smooth, but the edges of the polygons around the outside of the sphere image are still visible. This type of outline is called a silhouette edge.

E.g. 3: Interpolative shading can easily be applied to the sphere models, because the normal at each point p on the surface is in the direction from the origin to p. The true normal can thus be assigned to each vertex, and OpenGL will interpolate the shades computed at these vertices across each triangle. Thus, triangle is changed to

void triangle(point3 a, point3 b, point3 c)
{
    point3 n;
    int i;
    glBegin(GL_POLYGON);
    for (i = 0; i < 3; i++) n[i] = a[i];
    normal(n);
    glNormal3fv(n);

    glVertex3fv(a);
    for (i = 0; i < 3; i++) n[i] = b[i];
    normal(n);
    glNormal3fv(n);
    glVertex3fv(b);
    for (i = 0; i < 3; i++) n[i] = c[i];
    normal(n);
    glNormal3fv(n);
    glVertex3fv(c);

    glEnd();
}

The results of this definition of the normals are shown in the figure.

Global Rendering

Limitations imposed by the local lighting model:

If an array of spheres is illuminated by a distant source, the spheres close to the source prevent some of the light from the source from reaching the other spheres.

In a real scene, these spheres would be specular, and some light would be scattered among them. But if the local model is used, each sphere is shaded independently, and hence all appear the same to the viewer.

The local model also cannot produce shadows.

These effects can be achieved by more sophisticated (and slower) global rendering techniques, such as:

Ray tracing: It works well for highly specular surfaces. E.g. the scenes composed of highly reflective and translucent objects, such as glass balls.

Radiosity: It is best suited for scenes with perfectly diffuse surfaces, such as the interiors of buildings.

The two methods are complementary.

Ray Tracing

Ray tracing is, in many ways, an extension of the local lighting model. Many light rays leave a source, but only those rays that enter the lens of the synthetic camera and pass through the center of projection contribute to the image. The figure shows a single point source with several of the possible interactions with perfectly specular surfaces.

Rays can enter the lens of the camera directly from the source, from interactions with a surface visible to the camera, after multiple reflections from surfaces, or after transmission through one or more surfaces. Most of the rays that leave a source do not enter the lens and do not contribute to the image; hence, attempting to follow all rays from a light source is wasted effort.

Ray-cast model: Only the rays that contribute to the image are obtained if rays are traced in the reverse direction, i.e. starting at the center of projection. The resulting ray tracing is shown in the figure below.

Here, an image plane is ruled into pixel-sized areas. A color must be assigned to every pixel, so at least one ray is cast through each pixel. Each cast ray either intersects a surface or a light source, or goes off to infinity without striking anything. Pixels corresponding to the latter case can be assigned a background color. Rays that strike opaque surfaces require a shading calculation at the point of intersection. Doing so, using the Phong model, produces the same image as a local renderer.

However, much more can be accomplished with this model.

Comparison of a pipeline renderer and a ray tracer:

The sequence of steps in the pipeline renderer is: object modeling, projection, and visible-surface determination. In a ray tracer, the sequence in which these calculations are carried out is different.

o Pipeline renderer: works on a vertex-by-vertex basis; the reflection model is applied immediately.

o Ray tracer: works on a pixel-by-pixel basis; before the reflection model is applied, it is first checked whether the point of intersection between the cast ray and the surface is illuminated.

Other features of ray tracing: Shadow (or feeler) rays from the point on the surface to each source are computed. If a shadow ray intersects a surface before it meets the source, the light from that source is blocked from reaching the point under consideration, and the point is in shadow, at least from that source. No lighting calculation needs to be done for sources that are blocked from a point on the surface.

If all surfaces are opaque and the light scattered from surface to surface is not considered, an image that has shadows added is obtained. This much can be accomplished without recursive ray tracing, but the hidden-surface calculation needed for each point of intersection between a cast ray and a surface is expensive.

For highly reflective surfaces (mirror surfaces), the reflected ray can be followed as it bounces from surface to surface, until it either goes off to infinity or intersects a source. Such calculations are usually done recursively, and they take into account any absorption of light at the surfaces.

Ray tracing is particularly good at handling surfaces that are both reflecting and transmitting. A cast ray arriving at such a surface can be followed using the property that, when a ray from a source strikes a point, the light from the source is partially absorbed, and some of this light contributes to the diffuse reflection term; the rest of the incoming light is divided between a transmitted ray and a reflected ray.

From the perspective of the cast ray, if a light source is visible at the intersection point, then three tasks need to be performed:

1. The contribution from the light source at the point is computed, using the standard reflection model.
2. A ray is cast in the direction of a perfect reflection.
3. A ray is cast in the direction of the transmitted ray.

These two rays are treated just like the original cast ray: they are intersected (if possible) with other surfaces, end at a source, or go off to infinity. At each surface that these rays intersect, additional rays may be generated by reflection and transmission. If, instead, all surfaces are assumed to be perfectly diffuse, the rendering equation can be simplified and solved by a numerical method called radiosity.
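The recursive treatment of reflected rays can be illustrated with a deliberately simplified sketch: a single reflectance factor attenuates each bounce, and the recursion stops at a fixed depth. The one-term model and all names are illustrative; a real tracer would intersect each generated ray with the scene and spawn transmitted rays as well:

```c
#include <assert.h>
#include <math.h>

/* Simplified illustration of recursive ray tracing: the shade returned
   for a cast ray is its local contribution plus an attenuated
   contribution from one reflected ray, cut off at a maximum depth. */
double trace(double local_shade, double reflectance, int depth)
{
    if (depth == 0) return local_shade;
    return local_shade + reflectance * trace(local_shade, reflectance, depth - 1);
}
```

Cutting the recursion off at a small depth is the usual practical compromise, since each bounce contributes geometrically less to the final shade.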

Radiosity

The radiosity method breaks up the scene into small flat polygons, or patches, each of which can be assumed to be perfectly diffuse and is rendered in a constant shade, as shown:

There are two steps to find the shades needed:

1. Patches are considered pairwise to determine form factors that describe how the light energy leaving one patch affects the other.
2. Once the form factors are determined, the rendering equation, which starts as an integral equation, can be reduced to a set of linear equations for the radiosities (essentially the brightnesses) of the patches.

Once these equations are solved, the scene can be rendered using any renderer with flat shading. Although the amount of calculation required to compute the form factors is enormous (it is an O(n^2) problem for n patches), once the patch radiosities are determined, they are independent of the location of the viewer, because all surfaces are assumed to be perfectly diffuse. Hence, the scene can be rendered in a walkthrough as fast as it could be rendered using the local lighting model.

Radiosity works well for rendering interiors that are composed of diffuse reflectors. Even distributed light sources, if modeled as emissive patches, appear in the rendering.
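The linear-equation step can be illustrated with a toy two-patch system: each patch's radiosity B_i is its emission E_i plus its reflectivity rho_i times the light arriving from the other patch via the form factor, solved by simple fixed-point iteration. The numbers and names are illustrative, not from the notes:

```c
#include <assert.h>
#include <math.h>

/* Toy illustration: two patches exchanging light. Each radiosity is
   B_i = E_i + rho_i * F_ij * B_j (emission plus reflected incoming
   light); the coupled pair is solved by fixed-point iteration. */
void radiosity2(double B[2], const double E[2], const double rho[2],
                double F01, double F10, int iters)
{
    B[0] = E[0];
    B[1] = E[1];
    while (iters-- > 0) {
        double b0 = E[0] + rho[0] * F01 * B[1]; /* light from patch 1 */
        double b1 = E[1] + rho[1] * F10 * B[0]; /* light from patch 0 */
        B[0] = b0;
        B[1] = b1;
    }
}
```

Once the radiosities converge, each patch is drawn in its constant shade B_i by any flat-shading renderer, independent of the viewer's position.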
