CS262 – Computer Vision Lect 4 - Image Formation John Magee 25 January, 2017 Slides courtesy of Diane H. Theriault


  • CS262 – Computer Vision
    Lect 4 - Image Formation

    John Magee
    25 January, 2017

    Slides courtesy of Diane H. Theriault

  • Question of the Day:

    • Why is Computer Vision hard?

  • All this effort to make sure the LIGHTING is good for a movie. Why is more light needed for a good quality movie? What factors affect how much light reaches the film or image sensor? Why does your cell phone take such lousy pictures at a party? How does this all affect Computer Vision?

  • How are images formed
    1. Light is emitted from a light source
    2. Light hits a surface
    3. Light interacts with the surface
    4. Reflected light enters the camera aperture
    5. The camera sensor interprets the light

    Szeliski Ch. 2.2 (don’t worry about all the details of the math); Shapiro & Stockman Ch. 6, Ch. 2 (https://courses.cs.washington.edu/courses/cse576/99sp/book.html)


  • Light is emitted

    • A point light source radiates (emits) light uniformly in all directions

    • Properties of light:
      – Color spectrum (wavelength distribution)
      – Intensity (Watts / (Area × Solid Angle))

    • Note: a solid angle is like a cone
    • Note: “Area” light sources, like fluorescent lights, are a little different
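As a quick illustration (a sketch, not from the slides), the intensity of an ideal point source spreads over a sphere, so the light reaching a surface falls off with the square of the distance. The function name and the 100 W source power below are illustrative assumptions:

```python
import math

def point_source_irradiance(power_watts, distance_m):
    """Irradiance (W/m^2) at a given distance from an ideal point source.

    Assumes the source radiates its power uniformly over the full
    sphere of area 4*pi*d^2, with nothing absorbed along the way.
    """
    return power_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance quarters the irradiance (the inverse-square law):
e1 = point_source_irradiance(100.0, 1.0)
e2 = point_source_irradiance(100.0, 2.0)
print(e1 / e2)  # -> 4.0
```

This inverse-square falloff is one reason film sets need so much added light: moving a lamp twice as far from the subject costs three quarters of its effect.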

  • Light hits a surface

    Distance: 5 m, Orientation: 0 degrees, Solid angle: 16.4 degrees (attenuation)

    Distance: 2.5 m, Orientation: 0 degrees, Solid angle: 22.6 degrees

    Distance: ≈2.5 m, Orientation: 45 degrees, Solid angle: 11.4 degrees (foreshortening)

    Surface orientation is very important for determining the amount of incident light! The amount of incident light that falls on a surface (irradiated light) depends on the size of the surface and on the solid angle of light subtended by the surface, which in turn depends on the distance to the light and the orientation of the surface.
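The two effects above can be sketched in a few lines of Python. This is an illustrative model, not the slides' derivation: attenuation is taken as an inverse-square falloff and foreshortening as a cosine of the angle between the surface normal and the light direction:

```python
import math

def incident_light(intensity, distance_m, angle_deg):
    """Relative irradiance on a small surface patch from a distant light.

    A rough sketch: attenuation scales as 1/d^2 and foreshortening
    scales as cos(theta), where theta is the angle between the surface
    normal and the direction to the light.
    """
    theta = math.radians(angle_deg)
    return intensity * math.cos(theta) / distance_m ** 2

# Facing the light at 2.5 m vs. turned 45 degrees at the same distance:
print(incident_light(1.0, 2.5, 0))   # patch facing the light
print(incident_light(1.0, 2.5, 45))  # ~71% as much light (foreshortening)
```

The 45-degree patch receives cos(45°) ≈ 0.71 of the light the head-on patch does, matching the foreshortening picture above.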

  • Light Interacts with a surface
    The orientation of a surface is defined by its “normal vector”, which sticks straight up out of the surface.

    Simplified BRDF modeled with two components:

    • “Lambertian”, “flat” or “matte” component : light radiated equally in all directions

    • “Specular”, “shiny”, or “highlight” component: radiated light is reflected across the normal from the incoming light

    The bi-directional reflectance distribution function (“BRDF”) expresses:

    – the amount, direction, and color spectrum of reflected light

    depending on

    – the amount, direction, and color spectrum of incoming light

    Some light is absorbed due to surface color. What happens to the rest?
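The two-component model above can be sketched as a toy shading function. This is a hedged illustration: the coefficient values, the shininess exponent, and the vector conventions are assumptions chosen for the example, not values from the slides:

```python
def shade(normal, to_light, to_viewer, kd=0.7, ks=0.3, shininess=10):
    """Toy two-component reflectance: Lambertian + specular.

    All direction vectors are assumed to be unit-length 3-tuples;
    kd, ks, and the shininess exponent are illustrative choices.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Lambertian ("matte") term: light radiated equally in all
    # directions, so brightness depends only on the incoming angle.
    lambertian = max(0.0, dot(normal, to_light))
    # Specular ("shiny") term: incoming light mirrored across the normal.
    r = tuple(2 * dot(normal, to_light) * n - l
              for n, l in zip(normal, to_light))
    specular = max(0.0, dot(r, to_viewer)) ** shininess
    return kd * lambertian + ks * specular

# Light and viewer both along the normal: strongest response.
print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # -> 1.0
```

Because the specular term is raised to a power, it dies off quickly away from the mirror direction, which is why highlights appear as small bright spots.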

  • Reflected light enters a camera

    • The red triangle (behind the camera) and the blue triangle (in front of the camera) are similar; therefore, (image size) / (focal length) = (object size) / (scene depth).

    • Given any three of those terms, you can determine the fourth.

    Pinhole Model (diagram labels): focal distance / focal length; focal plane / image plane; scene depth; optical axis; object location; center of projection; image location
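The similar-triangles relationship can be written out explicitly. The symbols below are chosen for this sketch, not taken from the slides:

```latex
% Pinhole projection by similar triangles:
% y = image location (height on the image plane),
% f = focal length, Y = object height, Z = scene depth.
\frac{y}{f} = \frac{Y}{Z}
\quad\Longrightarrow\quad
y = f \, \frac{Y}{Z}
```

So an object twice as far away (double $Z$) projects at half the image height, and doubling the focal length doubles the image height.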

  • Reflected light enters a camera

    • For a given focal length, the lens equation determines which scene depth is brought into perfect focus on the image plane.

    • A “blur circle” or “circle of confusion” results when projections of objects are not focused on the image plane. The size of the blur circle depends on the distance to the object and the size of the aperture.

    • The allowable size of the blur circle (e.g. a pixel) determines the allowable range of depths in the scene (“depth of field”)

    • Note: The “F number” or “f stop” commonly used in photography is the ratio of focal length to aperture size. (http://www.dofmaster.com/dofjs.html)
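These relationships can be sketched numerically. This is a hedged thin-lens approximation, not the slides' derivation; all dimensions below are illustrative:

```python
def f_number(focal_length_mm, aperture_mm):
    """F number ("f stop"): ratio of focal length to aperture diameter."""
    return focal_length_mm / aperture_mm

def blur_circle_mm(focal_length_mm, aperture_mm, focus_dist_mm, object_dist_mm):
    """Approximate blur-circle ("circle of confusion") diameter.

    Thin-lens sketch: an object away from the focused distance projects
    to a disc rather than a point; a smaller aperture shrinks the disc
    (at the cost of admitting less light).
    """
    f = focal_length_mm
    # Image distances from the thin-lens equation 1/f = 1/z_o + 1/z_i.
    zi_focus = 1.0 / (1.0 / f - 1.0 / focus_dist_mm)
    zi_object = 1.0 / (1.0 / f - 1.0 / object_dist_mm)
    return aperture_mm * abs(zi_object - zi_focus) / zi_object

print(f_number(50, 25))  # -> 2.0, i.e. f/2
# Stopping down from f/2 to f/8 shrinks the blur circle 4x:
wide = blur_circle_mm(50, 25.0, 2000, 4000)
narrow = blur_circle_mm(50, 6.25, 2000, 4000)
print(wide / narrow)  # -> 4.0
```

This is the lighting trade-off from the question of the day: a small aperture gives a large depth of field but needs far more light, which is why dim party photos from a phone come out noisy or blurry.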


  • Camera sensor interprets light

    • The image is quantized into pixels to go from the physical size of the projection to pixel coordinates

    Szeliski 2.3, Shapiro & Stockman 2.2
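The quantization step can be sketched as a simple mapping from millimeters on the sensor to integer pixel coordinates. The sensor size and resolution below are illustrative defaults, not values from the slides:

```python
def to_pixel(x_mm, y_mm, sensor_width_mm=4.8, sensor_height_mm=3.6,
             width_px=640, height_px=480):
    """Quantize a physical image-plane position (mm) into pixel coordinates.

    (0, 0) mm is taken as the sensor's corner; the int() truncation is
    the quantization step that discards sub-pixel position.
    """
    col = int(x_mm / sensor_width_mm * width_px)
    row = int(y_mm / sensor_height_mm * height_px)
    return row, col

print(to_pixel(2.4, 1.8))  # center of the sensor -> (240, 320)
```

Any two physical positions that land inside the same cell become indistinguishable after this step, which bounds how precisely positions can be recovered from an image.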

    http://micro.magnet.fsu.edu/optics/lightandcolor/vision.html


  • Now what?
    • Interaction between light, objects, and the camera leads to images

    • The way image values change hopefully tells us something about the objects, the light, and the camera

  • Image Gradients
    • “The way image values change”: the image derivative
    • The “gradient” at a particular point (x, y) is a vector that points in the direction of largest change

    • The gradient can be expressed in Cartesian (x, y) or polar (magnitude, angle) coordinates
    • Every point in an image may have a different gradient vector

    Friday’s lab and this week’s homework will be devoted to image gradients and edges.

  • Discussion Questions:
    • What influences are mixed together when we observe the light reflected from a surface?

    • In order to infer surface orientation, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?

    • In order to infer surface properties, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?

    • What are some things we would like to know about objects that we can’t directly observe, even if we could correctly reconstruct surface orientation, color, texture, and reflectance properties? (hint: clothes) What steps could we take to try to understand those things, given the image information?

    • Think of some ways that we could define the scope of some tasks that we might be able to do, even if all we have is the image appearance and we can’t infer scene structure and surface orientation and properties.

  • Light incident on a surface

    • The amount of light that falls on a surface (irradiated light) depends on:
      – the size of the surface
      – the solid angle of light subtended by the surface

    • Surfaces that are further away from the light subtend a smaller solid angle (attenuation)

    • Surfaces that are turned away from the light subtend a smaller solid angle (foreshortening)

  • Image Gradients
    • The gradient is a vector like any other vector; it just happens to represent the way the values of the image are changing.
    • One way to compute the gradient: “finite differences”. Just compute the difference between each pixel and the previous one (horizontally and vertically).

    • Switching from the Cartesian representation (x, y) to the polar representation (magnitude, direction) is often helpful, and very, very important.
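The finite-difference computation above can be sketched directly (a minimal illustration on a plain 2-D list; real code would operate on image arrays):

```python
import math

def gradient(image, row, col):
    """Finite-difference gradient at (row, col) of a 2-D list of values.

    The difference between each pixel and the previous one,
    horizontally and vertically, as described above.
    """
    dx = image[row][col] - image[row][col - 1]  # horizontal change
    dy = image[row][col] - image[row - 1][col]  # vertical change
    magnitude = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)                  # polar representation
    return (dx, dy), (magnitude, angle)

# A vertical step edge: values jump from 0 to 10 between columns.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
(dx, dy), (mag, ang) = gradient(img, 1, 2)
print(dx, dy, mag)  # -> 10 0 10.0
```

At the edge the gradient points horizontally (across the edge) with a large magnitude; inside a flat region it would be (0, 0). That is why the polar form is so useful: the magnitude flags edges and the angle gives their orientation.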

    Friday’s lab and this week’s homework will be devoted to image gradients and edges.
