courses - spiespie.org/documents/conferencesexhibitions/ei15-courses-l.pdf · conferences and...

52
Conferences and Courses 8–12 February 2015 Location Hilton San Francisco, Union Square San Francisco, California, USA REGISTER TODAY www.electronicimaging.org Technologies - 3D Imaging, Interaction, and Metrology - Visualization, Perception, and Color - Image Processing - Image Capture - Computer Vision - Media Processing and Communication - Mobile Imaging COURSES 50% off all courses for student members

Upload: buitram

Post on 19-Mar-2018

215 views

Category:

Documents


1 download

TRANSCRIPT

Conferences and Courses8–12 February 2015

Location Hilton San Francisco, Union SquareSan Francisco, California, USA

RegisteR today

www.electronicimaging.org

technologies- 3D Imaging, Interaction, and Metrology

- Visualization, Perception, and Color

- Image Processing

- Image Capture

- Computer Vision

- Media Processing and Communication

- Mobile Imaging

COURSES50% off all courses for

student members

2 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Register today___www.electronicimaging.org

electronicimaging

is&t/sPieNasir d. Memon, Polytechnic Institute of New

York Univ. (USA)Kurt s. Niel, Upper Austria Univ. of Applied

Sciences (Austria)Maria V. ortiz segovia, Océ Print Logic

Technologies (France)thrasyvoulos N. Pappas, Northwestern Univ.

(USA)William Puech, Lab. d’Informatique de Robotique

et de Microelectronique de Montpellier (France)eric K. Ringger, Brigham Young Univ. (USA)alessandro Rizzi, Univ. degli Studi di Milano

(Italy)ian Roberts, Pacific Northwest National Lab.

(USA)Bernice e. Rogowitz, Visual Perspectives

Research and Consulting (USA)Juha Röning, Univ. of Oulu (Finland)eli saber, Rochester Institute of Technology

(USA)amir said, LG Electronics MobileComm U.S.A.,

Inc. (USA)Nitin sampat, Rochester Institute of Technology

(USA)Ken d. sauer, Univ. of Notre Dame (USA)Christopher d. shaw, Simon Fraser Univ.

(Canada)Robert sitnik, Warsaw Univ. of Technology

(Poland)Robert L. stevenson, Univ. of Notre Dame (USA)Radka tezaur, Nikon Research Corp. of America

(USA)sophie triantaphillidou, Univ. of Westminster

(United Kingdom)Philipp Urban, Fraunhofer-Institut für Graphische

Datenverarbeitung (Germany)Ralf Widenhorn, Portland State Univ. (USA)thomas Wischgoll, Wright State Univ. (USA)andrew J. Woods, Curtin Univ. (Australia)dietmar Wüller, Image Engineering GmbH & Co.

KG (Germany)

2015 Symposium Steering Committee:

sheila s. Hemami, Symposium Chair, Northeastern Univ. (USA)

Choon-Woo Kim, Symposium Co-Chair, Inha Univ. (Korea, Republic of)

Majid Rabbani, Eastman Kodak Co. (USA)

andrew J. Woods, Curtin Univ. (Australia)

sergio R. goma, Qualcomm Inc. (USA)

Kevin J. Matherson, Microsoft Corp. (USA)

Joyce e. Farrell, Stanford Univ. (USA)

suzanne e. grinnan, IS&T Executive Director (USA)

Rob Whitner, SPIE Event Manager (USA)

2015 Technical Committee:sos s. agaian, The Univ. of Texas at San Antonio

(USA)david akopian, The Univ. of Texas at San Antonio

(USA)adnan M. alattar, Digimarc Corp. (USA)Jan P. allebach, Purdue Univ. (USA)sebastiano Battiato, Univ. degli Studi di Catania

(Italy)e. Wes Bethel, Lawrence Berkeley National Lab.

(USA)Charles a. Bouman, Purdue Univ. (USA)Matthias F. Carlsohn, Computer Vision and

Image Communication at Bremen (Germany)david Casasent, Carnegie Mellon Univ. (USA)Reiner Creutzburg, Fachhochschule

Brandenburg (Germany)Huib de Ridder, Technische Univ. Delft

(Netherlands)Margaret dolinsky, Indiana Univ. (USA)antoine dupret, Commissariat à l’Énergie

Atomique (France)Karen o. egiazarian, Tampere Univ. of

Technology (Finland)Reiner eschbach, Xerox Corp. (USA)Zhigang Fan, SKR Labs (USA)Joyce e. Farrell, Stanford Univ. (USA)gregg e. Favalora, VisionScope Technologies

LLC (USA)Boyd a. Fowler, Google (USA)atanas P. gotchev, Tampere Univ. of Technology

(Finland)onur g. guleryuz, LG Electronics MobileComm

U.S.A., Inc. (USA)Ming C. Hao, Hewlett-Packard Labs. (USA)Chad d. Heitzenrater, Air Force Research Lab.

(USA)Nicolas s. Holliman, The Univ. of York (United

Kingdom)Francisco H. imai, Canon U.S.A., Inc. (USA)alark Joshi, Univ. of San Francisco (USA)david L. Kao, NASA Ames Research Ctr. (USA)Nasser Kehtarnavaz, The Univ. of Texas at

Dallas (USA)edmund y. Lam, The Univ. of Hong Kong (Hong

Kong, China)Bart Lamiroy, Univ. de Lorraine (France)Mohamed-Chaker Larabi, Univ. de Poitiers

(France)Qian Lin, Hewlett-Packard Co. (USA)Mark a. Livingston, U.S. Naval Research Lab.

(USA)Robert P. Loce, Xerox Corp. (USA)andrew Lumsdaine, Indiana Univ. (USA)gabriel g. Marcu, Apple Inc. (USA)Kevin J. Matherson, Microsoft Corp. (USA)ian e. Mcdowall, Fakespace Labs, Inc. (USA)

Join us in celebrating

www.spie.org/IYL

2015 Symposium Chair

sheila s. Hemami

Northeastern Univ. (USA)

2015 Symposium Co-Chair

Choon-Woo Kim

Inha Univ. (Republic of Korea)

2015 Short Course Chair

Majid Rabbani

Eastman Kodak Co. (USA)

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 3

Course index3D Imaging, Interaction, and MetrologySun SC468 image enhancement, deblurring and super-

Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 9

Sun SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355 . . . . . . . . . . . .p. 11

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 11

Sun SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635 . . . . . . .p. 9

Tue SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 9

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . .p. 10

Computer VisionSun SC1157 Camera Characterization and Camera Models

(Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 14

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 35

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 37

Mon SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . p. 34

Mon SC965 Joint design of optics and image Processing for imaging systems (Stork) 1:30 pm to 5:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 25

Tue SC807 digital Camera and scanner Performance evaluation: standards and Measurement (Burns, Williams) 8:30 am to 12:30 pm, $300 / $355 . . . .p. 18

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . p. 33

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . p. 35

Image CaptureSun SC1157 Camera Characterization and Camera Models

(Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 14

Sun SC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 27

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 31

Sun SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . p. 30

Sun SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355 . . . . . . . . . . . p. 32

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 32

Sun SC980 theory and Methods of Lightfield Photography (Georgiev, Lumsdaine) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 27

Sun SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355 . . . . p. 29

Mon SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 28

Mon SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . .p. 26

Mon SC965 Joint design of optics and image Processing for imaging systems (Stork) 1:30 pm to 5:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 25

Tue SC807 digital Camera and scanner Performance evaluation: standards and Measurement (Burns, Williams) 8:30 am to 12:30 pm, $300 / $355 . . . .p. 18

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . p. 32

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . p. 30

Image ProcessingSun SC967 High dynamic Range imaging: sensors and

architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 22

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 19

Sun SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . p. 22

Sun SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355 . . . . . . . . . . . .p. 20

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 25

Sun SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635 . . . . . .p. 24

Sun SC980 theory and Methods of Lightfield Photography (Georgiev, Lumsdaine) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 23

Sun SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 20

Mon SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 23

Mon SC965 Joint design of optics and image Processing for imaging systems (Stork) 1:30 pm to 5:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 25

Tue SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 24

Tue SC807 digital Camera and scanner Performance evaluation: standards and Measurement (Burns, Williams) 8:30 am to 12:30 pm, $300 / $355 . . . .p. 18

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . .p. 19

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 21

4 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Course indexMedia Processing and CommunicationSun SC1157 Camera Characterization and Camera Models

(Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 14

Sun SC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 39

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 40

Sun SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . p. 38

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 43

Sun SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635 . . . . . p. 43

Sun SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 37

Mon SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 39

Mon SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . .p. 41

Tue SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 43

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . p. 40

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 42

Mobile ImagingSun SC1157 Camera Characterization and Camera Models

(Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 14

Sun SC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 45

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 45

Sun SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . p. 44

Sun SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355 . . . . . . . . . . . p. 49

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 48

Mon SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 44

Mon SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . p. 48

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . p. 46

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . p. 46

Visualization, Perception, and ColorSun SC1157 Camera Characterization and Camera Models

(Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 14

Sun SC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 16

Sun SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 15

Sun SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . . . . . . .p. 16

Sun SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355 . . . . . . . . . . . .p. 12

Sun SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 12

Sun SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635 . . . . . .p. 17

Sun SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 17

Mon SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 13

Mon SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635 . . . . . . . . . . . . . . . . .p. 14

Tue SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .p. 18

Tue SC807 digital Camera and scanner Performance evaluation: standards and Measurement (Burns, Williams) 8:30 am to 12:30 pm, $300 / $355 . . . .p. 18

Tue SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355 . . . . . . . . . . . . .p. 15

Wed SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355 . . . . .p. 12

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 5

Money-back guaranteeWe are confident that once you experience an IS&T/SPIE course for yourself you will look to us for your future education needs. However, if for any reason you are dissatisfied, we will gladly refund your money. We just ask that you tell us what you did not like; suggestions for improvement are always welcome.

Continuing education UnitsIS&T/SPIE has been approved as an authorized provider of CEUs by IACET, The International Association for Continuing Education and Training (Provider #1002091). In obtaining this approval, IS&T/SPIE has demonstrated that it complies with the ANSI/IACET Standards which are widely recognized as standards of good practice.

is&t/sPie reserves the right to cancel a course due to insufficient advance registration.www.spie.org/education

short Courses at electronic imaging

short Courses__

Relevant training | Proven instructors | Education you need to stay competitive in today’s job market

• 16 Short Courses in fundamental and current topics in electronic imaging including color imaging, camera and digital image capture & evaluation, stereoscopic displays, mobile imaging, and more.

• Short Course attendees receive CEUs to fulfill continuing education requirements

• Full-time students receive 50% off courses

• All-new and featured courses for 2015 include

- Introduction to Color Imaging

- Camera Characterization and Camera Models

- Recent Trends in Imaging Devices

6 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

sUNday | MoNday | tUesday | WedNesday | tHURsday

short Course daily schedule

3D Imaging, Interaction, and MetrologySC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 9

SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355, p. 11

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 11

SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635, p. 9

SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355, p. 9

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 10

Computer VisionSC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 35

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 37

SC1157 Camera Characterization and Camera Models (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 34

SC965 Joint design of optics and image Processing for imaging systems, (Stork) 1:30 pm to 5:30 pm, $300 / $355, p. 25

SC807 digital Camera and scanner Performance evaluation: standards and Measurement, (Burns, Williams), 8:30 am to 12:30 pm, $300 / $355, p. 18

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 33

Image CaptureSC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680, p. 27

SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 31

SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635, p. 30

SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355, p. 32

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 32

SC980 theory and Methods of Lightfield Photography (Georgiev, Lumsdaine) 8:30 am to 5:30 pm, $525 / $635, p. 27

SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355, p. 29

SC1157 Camera Characterization and Camera Models (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355, p. 28

SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 26

SC965 Joint design of optics and image Processing for imaging systems, (Stork) 1:30 pm to 5:30 pm, $300 / $355, p. 25

SC807 digital Camera and scanner Performance evaluation: standards and Measurement, (Burns, Williams), 8:30 am to 12:30 pm, $300 / $355, p. 18

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 32

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 35

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 30

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 7

sUNday | MoNday | tUesday | WedNesday | tHURsday

short Course daily schedule

Image ProcessingSC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680, p. 22

SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 19

SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635, p. 22

SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355, p. 20

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 25

SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635, p. 24

SC980 theory and Methods of Lightfield Photography (Georgiev, Lumsdaine) 8:30 am to 5:30 pm, $525 / $635, p. 23

SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355, p. 20

SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355, p. 23

SC965 Joint design of optics and image Processing for imaging systems, (Stork) 1:30 pm to 5:30 pm, $300 / $355, p. 25

SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355, p. 24

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 19

SC807 digital Camera and scanner Performance evaluation: standards and Measurement, (Burns, Williams), 8:30 am to 12:30 pm, $300 / $355, p. 18

Media Processing and CommunicationSC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680, p. 39

SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 40

SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635, p. 38

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 43

SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635, p. 43

SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355, p. 37

SC1157 Camera Characterization and Camera Models (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355, p. 39

SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 41

SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355, p. 43

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 40

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 21

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 42

8 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Visualization, Perception, and ColorSC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680, p. 16

SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 15

SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635, p. 16

SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355, p. 12

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 12

SC060 stereoscopic display application issues (Merritt, Woods) 8:30 am to 5:30 pm, $525 / $635, p. 17

SC1048 Recent trends in imaging devices (Battiato, Farinella) 1:30 pm to 5:30 pm, $300 / $355, p. 17

SC1157 Camera Characterization and Camera Models (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355, p. 13

SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC927 3d imaging (Agam) 8:30 am to 12:30 pm, $300 / $355, p. 18

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 15

SC807 digital Camera and scanner Performance evaluation: standards and Measurement, (Burns, Williams), 8:30 am to 12:30 pm, $300 / $355, p. 18

sUNday | MoNday | tUesday | WedNesday | tHURsday

short Course daily schedule

Mobile ImagingSC967 High dynamic Range imaging: sensors and architectures (Darmont) 8:30 am to 5:30 pm, $570 / $680, p. 45

SC468 image enhancement, deblurring and super-Resolution (Rabbani) 8:30 am to 5:30 pm, $525 / $635, p. 45

SC1058 image Quality and evaluation of Cameras in Mobile devices (Matherson, Artmann) 8:30 am to 5:30 pm, $525 / $635, p. 44

SC1154 introduction to digital Color imaging (Sharma) 8:30 am to 12:30 pm, $300 / $355, p. 49

SC969 Perception, Cognition, and Next generation imaging (Rogowitz) 8:30 am to 12:30 pm, $300 / $355, p. 48

SC1157 Camera Characterization and Camera Models (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 14

SC1097 HdR imaging in Cameras, displays and Human Vision (Rizzi, McCann) 8:30 am to 12:30 pm, $300 / $355, p. 44

SC1049 Benchmarking image Quality of still and Video imaging systems (Phillips, Hornung, Denman) 8:30 am to 5:30 pm, $525 / $635, p. 48

SC1015 Understanding and interpreting images (Rabbani) 1:30 pm to 5:30 pm, $300 / $355, p. 46

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 46

SC812 Perceptual Metrics for image and Video Quality in a Broader Context: From Perceptual transparency to structural equivalence (Pappas, Hemami) 1:30 pm to 5:30 pm, $300 / $355, p. 12

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 9

short Courses

3D Imaging, Interaction, and Metrology

Stereoscopic Display Application Issues

SC060Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

When correctly implemented, stereoscopic 3D displays can provide significant benefits in many areas, including endoscopy and other medical imaging, teleoperated vehicles and telemanipulators, CAD, molecular modeling, 3D computer graphics, 3D visualization, photo interpretation, video-based training, and entertainment. This course conveys a concrete understanding of basic principles and pitfalls that should be considered when setting up stereoscopic systems and producing stereoscopic content. The course will demonstrate a range of stereoscopic hardware and 3D imaging & display principles, outline the key issues in an ortho-stereoscopic video display setup, and show 3D video from a wide variety of applied stereoscopic imaging systems.

LEARNING OUTCOMESThis course will enable you to:• list critical human factors guidelines for stereoscopic display

configuration and implementation • calculate optimal camera focal length, separation, display size, and

viewing distance to achieve a desired level of depth acuity • examine comfort limits for focus/fixation mismatch and on-

screen parallax values as a function of focal length, separation, convergence, display size, and viewing-distance factors

• set up a large-screen stereo display system using AV equipment readily available at most conference sites, for 3D stills and for full-motion 3D video

• rank the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications

• explain common pitfalls in designing tests to compare 2D vs. 3D displays

• calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters

• design and set up an ortho-stereoscopic 3D imaging/display system • understand the projective geometry involved in stereoscopic

modeling • determine the problems, and the solutions, for converting

stereoscopic video across video standards such as NTSC and PAL • work with stereoscopic 3D video and stills -using analog and digital

methods of capture/filming, encoding, storage, format conversion, display, and publishing

• describe the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application

• understand existing and developing stereoscopic standards

INTENDED AUDIENCEThis course is designed for engineers, scientists, and program managers who are using, or considering using, stereoscopic 3D displays in their applications. The solid background in stereoscopic system fundamentals, along with many examples of advanced 3D display applications, makes this course highly useful both for those who are new to stereoscopic 3D and also for those who want to advance their current understanding and utilization of stereoscopic systems.

INSTRUCTORJohn Merritt is a 3D display systems consultant at The Merritt Group, Williamsburg, MA, USA with more than 25 years experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, off-road mobility, unmanned vehicles, night vision devices, photo interpretation, scientific visualization, and medical imaging.

andrew Woods is a research engineer at Curtin University’s Centre for Marine Science and Technology in Perth, Western Australia. He has over 20 years of experience working on the design, application, and evaluation of stereoscopic technologies for industrial and entertainment applications.

3D Imaging

SC927Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course will begin by reviewing geometric image formation and mathematical concepts that are used to describe it, and then move to discuss algorithms for 3D model reconstruction.The problem of 3D model reconstruction is an inverse problem in which we need to infer 3D information based on incomplete (2D) observations. We will discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters, and the establishment of correspondence between views. We will discuss algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course will also introduce relevant 3D imaging software components available through the industry standard OpenCV library.

LEARNING OUTCOMESThis course will enable you to:• describe fundamental concepts in 3D imaging• develop algorithms for 3D model reconstruction from 2D images• incorporate camera calibration into your reconstructions• classify the limitations of reconstruction techniques• use industry standard tools for developing 3D imaging applications

INTENDED AUDIENCEEngineers, researchers, and software developers, who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

INSTRUCTORgady agam is an Associate Professor of Computer Science at the Illinois Institute of Technology. He is the director of the Visual Computing Lab at IIT which focuses on imaging, geometric modeling, and graphics applications. He received his PhD degree from Ben-Gurion University in 1999.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

10 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.

This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 11

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Introduction to Digital Color Imaging New

SC1154Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via a overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.

LEARNING OUTCOMESThis course will enable you to:• explain how color is perceived starting from a physical stimulus

and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast

• describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system

• list basic processing functions in a digital color imaging system, and schematically represent a system from input to output for common devices such as a digital cameras, displays, and color printers

• describe why color management is required and how it is performed• explain the role of color appearance transforms in image color

manipulations for gamut mapping and enhancement• explain how interactions between color and spatial dimensions

are commonly utilized in designing color imaging systems and algorithms

• cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization

INTENDED AUDIENCEThe short course is intended for engineers, scientists, students, and managers interested in acquiring a broad- system wide view of digital color imaging systems. Prior familiarity with basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

INSTRUCTORgaurav sharma has over two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a Professor at the University of Rochester in the Departments of Electrical and Computer Engineering and Computer Science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 49 issued patents and has authored over a 150 peer-reviewed publications. He is the editor of the “Digital Color Imaging Handbook” published by CRC Press and currently serves as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging. Dr. Sharma is a fellow of IEEE, SPIE, and IS&T.

short Courses

12 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Visualization, Perception, and Color

Introduction to Digital Color Imaging New

SC1154Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via a overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.

LEARNING OUTCOMESThis course will enable you to:• explain how color is perceived starting from a physical stimulus

and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast

• describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system

• list basic processing functions in a digital color imaging system, and schematically represent a system from input to output for common devices such as a digital cameras, displays, and color printers

• describe why color management is required and how it is performed• explain the role of color appearance transforms in image color

manipulations for gamut mapping and enhancement• explain how interactions between color and spatial dimensions

are commonly utilized in designing color imaging systems and algorithms

• cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization

INTENDED AUDIENCEThe short course is intended for engineers, scientists, students, and managers interested in acquiring a broad- system wide view of digital color imaging systems. Prior familiarity with basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

INSTRUCTORgaurav sharma has over two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a Professor at the University of Rochester in the Departments of Electrical and Computer Engineering and Computer Science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 49 issued patents and has authored over a 150 peer-reviewed publications. He is the editor of the “Digital Color Imaging Handbook” published by CRC Press and currently serves as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging. Dr. Sharma is a fellow of IEEE, SPIE, and IS&T.

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 13

applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications• Design of databases and subjective procedures for metric

development and testing• Metric performance comparisons, selection, and general use and

abuse• Embedded metric performance, e.g., for rate-distortion optimized

compression or restoration• Metrics for specific distortions, e.g., blocking and blurring, and for

specific attributes, e.g., contrast, roughness, and glossiness• Multimodal applications

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

HDR Imaging in Cameras, Displays and Human Vision

SC1097Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 8:30 am to 12:30 pm

High-dynamic range (HDR) imaging is a significant improvement over conventional imaging. After a description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. In particular, the course presents measurements about the limits of accurate camera acquisition and the usable range of light for displays of our vision system. Regarding our vision system, the course discusses the role of accurate vs. non-accurate luminance recording for the final appearance of a scene, presenting the quality and the characteristics of visual information actually available on the retina. It ends with a discussion of the principles of tone rendering and the role of spatial comparison.

LEARNING OUTCOMESThis course will enable you to:• explore the history of HDR imaging• describe dynamic range and quantization: the ‘salame’ metaphor• compare single and multiple-exposure for scene capture• measure optical limits in acquisition and visualization• discover relationship between HDR range and scene dependency ;

the effect of glare• explore the limits of our vision system on HDR• calculate retinal luminance• put in relationship the HDR images and the visual appearance• identify tone-rendering problems and spatial methods• verify the changes in color spaces due to dynamic range expansion

short Courses

14 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

INTENDED AUDIENCEColor scientists, software and hardware engineers, photographers, cinematographers, production specialists, and students interested in using HDR images in real applications.

INSTRUCTORalessandro Rizzi has been researching in the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images with particular attention to color vision mechanisms. He is Associate professor at the Dept. of Computer Science at University of Milano, teaching Fundamentals of Digital Imaging, Multimedia Video, and Human-Computer Interaction. He is one of the founders of the Italian Color Group and member of several program committees of conferences related to color and digital imaging.John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist, and IS&T 2005 Honorary Member.

Camera Characterization and Camera Models New

SC1157Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies.This course provides an insight on camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level). The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurements devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.The course “SC1049 Benchmarking Image Quality of Still and Video Imaging Systems,” describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• build up and operate a camera characterization lab• master camera characterization protocols• understand camera models• define test plans• compare cameras as a system (end picture), subsystem (raw) or

component level (optics, sensor, ISP)• define data sets for benchmarks

INTENDED AUDIENCEImage scientists, camera designers.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutiquebased in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Benchmarking image Quality of still and Video imaging systems

SC1049Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd Monday 8:30 am to 5:30 pm

Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.The course “SC1157 Camera Characterization and Camera Models,” describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 15

LEARNING OUTCOMESThis course will enable you to:• summarize the overall image quality of a camera• identify defects that degrade image quality in natural images and

what component of the camera should/could be improved for better image quality

• evaluate the impact various output use cases have on overall image quality

• define subjective test plans and protocols• compare the image quality of a set of cameras• set up benchmarking protocols depending on use cases• build up a subjective image quality lab

INTENDED AUDIENCEImage scientists, engineers, or managers who wish to learn more about image quality and how to evaluate still and video cameras for various applications. A good understanding of imaging and how a camera works is assumed.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutique based in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in

more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

short Courses

16 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Image Quality and Evaluation of Cameras In Mobile Devices

SC1058Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.This short course provides an overview of “light in to byte out” issues associated with digital and mobile imaging cameras. The course covers, optics, sensors, image processing, and sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the “visual noise” algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.

LEARNING OUTCOMESThis course will enable you to:• describe pixel technology and color filtering• describe illumination, photons, sensor and camera radiometry• select a sensor for a given application• describe and measure sensor performance metrics• describe and understand the optics of digital and mobile imaging

systems• examine the difficulties in minimizing sensor sizes• assess the need for per unit calibrations in digital still cameras and

mobile imaging devices• learn about noise, its sources, and methods of managing it• make noise and resolution measurements based on international

standardso EMVA 1288o ISO 14524 (OECF)/ISO 15739 (Noise)o Visual Noiseo ISO 12233 (Resolution)• assess influence of the image pipeline on noise• utilize today’s algorithms to reduce noise in images• measure noise based on human perception• optimize image quality by balancing noise reduction and resolution• compare hardware tradeoffs, noise reduction algorithms, and settings

for optimal image quality

INTENDED AUDIENCEAll people evaluating the image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

INSTRUCTORKevin Matherson is a senior image scientist in the research and development lab of Hewlett-Packard’s Imaging and Printing Group and has worked in the field of digital imaging since 1985. He joined Hewlett Packard in 1996 and has participated in the development of all HP digital and mobile imaging cameras produced since that time. His primary research interests focus on noise characterization, optical system analysis, and the optimization of camera image quality. Dr. Matherson currently leads the camera characterization laboratory in Fort Collins and holds Masters and PhD degrees in Optical Sciences from the University of Arizona.Uwe artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

High Dynamic Range Imaging: Sensors and Architectures

SC967Course Level: intermediateCeU: 0.65 $570 Members | $680 Non-Members Usd sunday 8:30 am to 5:30 pm

This course provides attendees with an intermediate knowledge of high dynamic range image sensors and techniques for industrial and non-industrial applications. The course describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets. The course follows a mathematic approach to define the amount of information that can be extracted from the image for each of the methods described. Some methods for automatic control of exposure and dynamic range of image sensors and other issues like color and glare will be introduced.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 17

LEARNING OUTCOMESThis course will enable you to:• describe various approaches to achieve high dynamic range imaging• predict the behavior of a given sensor or architecture on a scene• specify the sensor or system requirements for a high dynamic range

application• classify a high dynamic range application into one of several

standard types

INTENDED AUDIENCEThis material is intended for anyone who needs to learn more about quantitative side of high dynamic range imaging. Optical engineers, electronic engineers and scientists will find useful information for their next high dynamic range application.

INSTRUCTORarnaud darmont is owner and CEO of Aphesa, a company founded in 2008 and specialized in custom camera developments, image sensor consulting, the EMVA1288 standard and camera benchmarking. He holds a degree in Electronic Engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for over 7 years in the field of CMOS image sensors and high dynamic range imaging.

COURSE PRICE INCLUDES the text High Dynamic Range Imaging: Sen-sors and Architectures (SPIE Press, 2012) by Arnaud Darmont.

Recent Trends in Imaging Devices

SC1048Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 1:30 pm to 5:30 pm

In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have been dramatically diffused. The increasing of their computational performances combined with an higher storage capability allowed to design and implement advanced imaging systems that can automatically process visual data with the purpose of understanding the content of the observed scenes. In the next years, we will be conquered by wearable visual devices acquiring, streaming and logging video of our daily life. This new exciting imaging domain, in which the scene is observed from a first person point of view, poses new challenges to the research community, as well as gives the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition and video summarization, have to be re-defined and re-designed by considering the emerging wearable imaging domain.In the first part of this course we will review the main algorithms involved in the single-sensor imaging devices pipeline describing also some advanced applications. In the second part of the course we will give an overview of the recent trends of imaging devices considering the wearable domain. Challenges and applications will be discussed considering the state-of-the-art literature.

LEARNING OUTCOMESThis course will enable you to:• describe operating single-sensor imaging systems for commercial

and scientific imaging applications• explain how imaging data are acquired and processed (demosaicing,

color calibration, etc.)• list specifications and requirements to select a specific algorithm for

your imaging application• recognize performance differences among imaging pipeline

technologies• become familiar with current and future imaging technologies,

challenges and applications

INTENDED AUDIENCEThis course is intended for those with a general computing background, and is interested in the topic of image processing and computer vision. Students, researchers, and practicing engineers should all be able to benefit from the general overview of the field and the introduction of the most recent advances of the technology.

INSTRUCTORsebastiano Battiato received his Ph.D. in computer science and applied mathematics in 1999, and led the “Imaging” team at STMicroelectronics in Catania through 2003. He joined the Department of Mathematics and Computer Science at the University of Catania as assistant professor in 2004 and became associate professor in 2011. His research interests include image enhancement and processing, image coding, camera imaging technology and multimedia forensics. He has published more than 90 papers in international journals, conference proceedings and book chapters. He is a co-inventor of about 15 international patents, reviewer for several international journals, and has been regularly a member of numerous international conference committees. He is director (and co-founder) of the International Computer Vision Summer School (ICVSS), Sicily, Italy. He is a senior member of the IEEE.giovanni Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in computer science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008, as a Contract Researcher. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited four volumes and coauthored more than 60 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer and on the programme committee for major international journals and international conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS).

Stereoscopic Display Application Issues

SC060Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

When correctly implemented, stereoscopic 3D displays can provide significant benefits in many areas, including endoscopy and other medical imaging, teleoperated vehicles and telemanipulators, CAD, molecular modeling, 3D computer graphics, 3D visualization, photo interpretation, video-based training, and entertainment. This course conveys a concrete understanding of basic principles and pitfalls that should be considered when setting up stereoscopic systems and producing stereoscopic content. The course will demonstrate a range of stereoscopic hardware and 3D imaging & display principles, outline the key issues in an ortho-stereoscopic video display setup, and show 3D video from a wide variety of applied stereoscopic imaging systems.

LEARNING OUTCOMESThis course will enable you to:• list critical human factors guidelines for stereoscopic display

configuration and implementation • calculate optimal camera focal length, separation, display size, and

viewing distance to achieve a desired level of depth acuity • examine comfort limits for focus/fixation mismatch and on-

screen parallax values as a function of focal length, separation, convergence, display size, and viewing-distance factors

• set up a large-screen stereo display system using AV equipment readily available at most conference sites, for 3D stills and for full-motion 3D video

• rank the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications

short Courses

18 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

• explain common pitfalls in designing tests to compare 2D vs. 3D displays

• calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters

• design and set up an ortho-stereoscopic 3D imaging/display system • understand the projective geometry involved in stereoscopic

modeling • determine the problems, and the solutions, for converting

stereoscopic video across video standards such as NTSC and PAL • work with stereoscopic 3D video and stills -using analog and digital

methods of capture/filming, encoding, storage, format conversion, display, and publishing

• describe the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application

• understand existing and developing stereoscopic standards

INTENDED AUDIENCEThis course is designed for engineers, scientists, and program managers who are using, or considering using, stereoscopic 3D displays in their applications. The solid background in stereoscopic system fundamentals, along with many examples of advanced 3D display applications, makes this course highly useful both for those who are new to stereoscopic 3D and also for those who want to advance their current understanding and utilization of stereoscopic systems.

INSTRUCTORJohn Merritt is a 3D display systems consultant at The Merritt Group, Williamsburg, MA, USA with more than 25 years experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, off-road mobility, unmanned vehicles, night vision devices, photo interpretation, scientific visualization, and medical imaging.andrew Woods is a research engineer at Curtin University’s Centre for Marine Science and Technology in Perth, Western Australia. He has over 20 years of experience working on the design, application, and evaluation of stereoscopic technologies for industrial and entertainment applications.

3D Imaging

SC927Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course will begin by reviewing geometric image formation and mathematical concepts that are used to describe it, and then move to discuss algorithms for 3D model reconstruction.The problem of 3D model reconstruction is an inverse problem in which we need to infer 3D information based on incomplete (2D) observations. We will discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters, and the establishment of correspondence between views. We will discuss algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course will also introduce relevant 3D imaging software components available through the industry standard OpenCV library.

LEARNING OUTCOMESThis course will enable you to:• describe fundamental concepts in 3D imaging• develop algorithms for 3D model reconstruction from 2D images• incorporate camera calibration into your reconstructions• classify the limitations of reconstruction techniques• use industry standard tools for developing 3D imaging applications

INTENDED AUDIENCEEngineers, researchers, and software developers, who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

INSTRUCTORgady agam is an Associate Professor of Computer Science at the Illinois Institute of Technology. He is the director of the Visual Computing Lab at IIT which focuses on imaging, geometric modeling, and graphics applications. He received his PhD degree from Ben-Gurion University in 1999.

Digital Camera and Scanner Performance Evaluation: Standards and Measurement

SC807Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

This is an updated course on imaging performance measurement methods for digital image capture devices and systems. We introduce several ISO measurement protocols for camera resolution, tone-transfer, noise, etc. We focus on the underlying sources of variability in system performance, measurement error, and how to manage this variability in working environments. The propagation of measurement variability will be described for several emerging standard methods for; image texture, distortion, color shading, flare and chromatic aberration. Using actual measurements we demonstrate how standards can be adapted to evaluate capture devices ranging from cell phone cameras to scientific detectors. New this year, we will be discussing the use of raw files to investigate intrinsic signal and noise characteristics of the image-capture path.

LEARNING OUTCOMESThis course will enable you to:• appreciate the difference between imaging performance and image

quality• interpret and apply the different flavors of each ISO performance

method• identify sources of system variability, and understand resulting

measurement error• distill information-rich ISO metrics into single measures for quality

assurance• adapt standard methods for use in factory testing• select software elements (with Matlab examples) for performance

evaluation programs• use raw images to investigate intrinsic/limiting imaging perfromance

INTENDED AUDIENCEAlthough technical in content, this course is intended for a wide audience: image scientists, quality engineers, and others evaluating digital camera and scanner performance. No background in imaging performance (MTF, etc.) evaluation will be assumed, although the course will provide previous attendees with an update and further insight for implementation. Detailed knowledge of Matlab is not needed, but exposure to similar software environments will be helpful.

INSTRUCTORPeter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox and Eastman Kodak. A frequent speaker at technical conferences, he has contributed to several imaging standards. He has taught several imaging courses: at Kodak, SPIE, and IS&T technical conferences, and at the Center for Imaging Science, RIT.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 19

donald Williams , founder of Image Science Associates, was with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices, and imaging fidelity issues. He co-leads the TC42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) scanner dynamic range (ISO 21550) and is the editor for the second edition to digital camera resolution (ISO 12233).

Image Processing

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

short Courses

20 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Recent Trends in Imaging Devices

SC1048Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 1:30 pm to 5:30 pm

In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have been dramatically diffused. The increasing of their computational performances combined with an higher storage capability allowed to design and implement advanced imaging systems that can automatically process visual data with the purpose of understanding the content of the observed scenes. In the next years, we will be conquered by wearable visual devices acquiring, streaming and logging video of our daily life. This new exciting imaging domain, in which the scene is observed from a first person point of view, poses new challenges to the research community, as well as gives the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition and video summarization, have to be re-defined and re-designed by considering the emerging wearable imaging domain.In the first part of this course we will review the main algorithms involved in the single-sensor imaging devices pipeline describing also some advanced applications. In the second part of the course we will give an overview of the recent trends of imaging devices considering the wearable domain. Challenges and applications will be discussed considering the state-of-the-art literature.

LEARNING OUTCOMESThis course will enable you to:• describe operating single-sensor imaging systems for commercial

and scientific imaging applications• explain how imaging data are acquired and processed (demosaicing,

color calibration, etc.)• list specifications and requirements to select a specific algorithm for

your imaging application• recognize performance differences among imaging pipeline

technologies• become familiar with current and future imaging technologies,

challenges and applications

INTENDED AUDIENCEThis course is intended for those with a general computing background, and is interested in the topic of image processing and computer vision. Students, researchers, and practicing engineers should all be able to benefit from the general overview of the field and the introduction of the most recent advances of the technology.

INSTRUCTORsebastiano Battiato received his Ph.D. in computer science and applied mathematics in 1999, and led the “Imaging” team at STMicroelectronics in Catania through 2003. He joined the Department of Mathematics and Computer Science at the University of Catania as assistant professor in 2004 and became associate professor in 2011. His research interests include image enhancement and processing, image coding, camera imaging technology and multimedia forensics. He has published more than 90 papers in international journals, conference proceedings and book

chapters. He is a co-inventor of about 15 international patents, reviewer for several international journals, and has been regularly a member of numerous international conference committees. He is director (and co-founder) of the International Computer Vision Summer School (ICVSS), Sicily, Italy. He is a senior member of the IEEE.giovanni Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in computer science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008, as a Contract Researcher. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited four volumes and coauthored more than 60 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer and on the programme committee for major international journals and international conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS).

Introduction to Digital Color Imaging New

SC1154Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via a overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.

LEARNING OUTCOMESThis course will enable you to:• explain how color is perceived starting from a physical stimulus

and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast

• describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system

• list basic processing functions in a digital color imaging system, and schematically represent a system from input to output for common devices such as a digital cameras, displays, and color printers

• describe why color management is required and how it is performed• explain the role of color appearance transforms in image color

manipulations for gamut mapping and enhancement• explain how interactions between color and spatial dimensions

are commonly utilized in designing color imaging systems and algorithms

• cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization

INTENDED AUDIENCEThe short course is intended for engineers, scientists, students, and managers interested in acquiring a broad- system wide view of digital color imaging systems. Prior familiarity with basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 21

INSTRUCTORgaurav sharma has over two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a Professor at the University of Rochester in the Departments of Electrical and Computer Engineering and Computer Science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 49 issued patents and has authored over a 150 peer-reviewed publications. He is the editor of the “Digital Color Imaging Handbook” published by CRC Press and currently serves as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging. Dr. Sharma is a fellow of IEEE, SPIE, and IS&T.

Digital Camera and Scanner Performance Evaluation: Standards and Measurement

SC807Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

This is an updated course on imaging performance measurement methods for digital image capture devices and systems. We introduce several ISO measurement protocols for camera resolution, tone-transfer, noise, etc. We focus on the underlying sources of variability in system performance, measurement error, and how to manage this variability in working environments. The propagation of measurement variability will be described for several emerging standard methods for; image texture, distortion, color shading, flare and chromatic aberration. Using actual measurements we demonstrate how standards can be adapted to evaluate capture devices ranging from cell phone cameras to scientific detectors. New this year, we will be discussing the use of raw files to investigate intrinsic signal and noise characteristics of the image-capture path.

LEARNING OUTCOMESThis course will enable you to:• appreciate the difference between imaging performance and image

quality• interpret and apply the different flavors of each ISO performance

method• identify sources of system variability, and understand resulting

measurement error• distill information-rich ISO metrics into single measures for quality

assurance• adapt standard methods for use in factory testing• select software elements (with Matlab examples) for performance

evaluation programs• use raw images to investigate intrinsic/limiting imaging perfromance

INTENDED AUDIENCEAlthough technical in content, this course is intended for a wide audience: image scientists, quality engineers, and others evaluating digital camera and scanner performance. No background in imaging performance (MTF, etc.) evaluation will be assumed, although the course will provide previous attendees with an update and further insight for implementation. Detailed knowledge of Matlab is not needed, but exposure to similar software environments will be helpful.

INSTRUCTORPeter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox and Eastman Kodak. A frequent speaker at technical conferences, he has contributed to several imaging standards. He has taught several imaging courses: at Kodak, SPIE, and IS&T technical conferences, and at the Center for Imaging Science, RIT.donald Williams , founder of Image Science Associates, was with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices, and imaging fidelity issues. He co-leads the TC42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) scanner dynamic range (ISO 21550) and is the editor for the second edition to digital camera resolution (ISO 12233).

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications• Design of databases and subjective procedures for metric

development and testing• Metric performance comparisons, selection, and general use and

abuse• Embedded metric performance, e.g., for rate-distortion optimized

compression or restoration• Metrics for specific distortions, e.g., blocking and blurring, and for

specific attributes, e.g., contrast, roughness, and glossiness• Multimodal applications

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

short Courses

22 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

Image Quality and Evaluation of Cameras In Mobile Devices

SC1058Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.This short course provides an overview of “light in to byte out” issues associated with digital and mobile imaging cameras. The course covers, optics, sensors, image processing, and sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the “visual noise” algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.

LEARNING OUTCOMESThis course will enable you to:• describe pixel technology and color filtering• describe illumination, photons, sensor and camera radiometry• select a sensor for a given application• describe and measure sensor performance metrics• describe and understand the optics of digital and mobile imaging

systems• examine the difficulties in minimizing sensor sizes• assess the need for per unit calibrations in digital still cameras and

mobile imaging devices• learn about noise, its sources, and methods of managing it• make noise and resolution measurements based on international

standards o EMVA 1288 o ISO 14524 (OECF)/ISO 15739 (Noise) o Visual Noise o ISO 12233 (Resolution)• assess influence of the image pipeline on noise• utilize today’s algorithms to reduce noise in images• measure noise based on human perception• optimize image quality by balancing noise reduction and resolution• compare hardware tradeoffs, noise reduction algorithms, and

settings for optimal image quality

INTENDED AUDIENCEAll people evaluating the image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

INSTRUCTORKevin Matherson is a senior image scientist in the research and development lab of Hewlett-Packard’s Imaging and Printing Group and has worked in the field of digital imaging since 1985. He joined Hewlett Packard in 1996 and has participated in the development of all HP digital and mobile imaging cameras produced since that time. His primary research interests focus on noise characterization, optical system analysis, and the optimization of camera image quality. Dr. Matherson currently leads the camera characterization laboratory in Fort Collins and holds Masters and PhD degrees in Optical Sciences from the University of Arizona.Uwe artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

High Dynamic Range Imaging: Sensors and Architectures

SC967Course Level: intermediateCeU: 0.65 $570 Members | $680 Non-Members Usd sunday 8:30 am to 5:30 pm

This course provides attendees with an intermediate knowledge of high dynamic range image sensors and techniques for industrial and non-industrial applications. The course describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets. The course follows a mathematic approach to define the amount of information that can be extracted from the image for each of the methods described. Some methods for automatic control of exposure and dynamic range of image sensors and other issues like color and glare will be introduced.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 23

LEARNING OUTCOMESThis course will enable you to:• describe various approaches to achieve high dynamic range imaging• predict the behavior of a given sensor or architecture on a scene• specify the sensor or system requirements for a high dynamic range

application• classify a high dynamic range application into one of several

standard types

INTENDED AUDIENCEThis material is intended for anyone who needs to learn more about quantitative side of high dynamic range imaging. Optical engineers, electronic engineers and scientists will find useful information for their next high dynamic range application.

INSTRUCTORarnaud darmont is owner and CEO of Aphesa, a company founded in 2008 and specialized in custom camera developments, image sensor consulting, the EMVA1288 standard and camera benchmarking. He holds a degree in Electronic Engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for over 7 years in the field of CMOS image sensors and high dynamic range imaging.

COURSE PRICE INCLUDES the text High Dynamic Range Imaging: Sen-sors and Architectures (SPIE Press, 2012) by Arnaud Darmont.

HDR Imaging in Cameras, Displays and Human Vision

SC1097Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 8:30 am to 12:30 pm

High-dynamic range (HDR) imaging is a significant improvement over conventional imaging. After a description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. In particular, the course presents measurements about the limits of accurate camera acquisition and the usable range of light for displays of our vision system. Regarding our vision system, the course discusses the role of accurate vs. non-accurate luminance recording for the final appearance of a scene, presenting the quality and the characteristics of visual information actually available on the retina. It ends with a discussion of the principles of tone rendering and the role of spatial comparison.

LEARNING OUTCOMESThis course will enable you to:• explore the history of HDR imaging• describe dynamic range and quantization: the ‘salame’ metaphor• compare single and multiple-exposure for scene capture• measure optical limits in acquisition and visualization• discover relationship between HDR range and scene dependency ;

the effect of glare• explore the limits of our vision system on HDR• calculate retinal luminance• put in relationship the HDR images and the visual appearance• identify tone-rendering problems and spatial methods• verify the changes in color spaces due to dynamic range expansion

INTENDED AUDIENCEColor scientists, software and hardware engineers, photographers, cinematographers, production specialists, and students interested in using HDR images in real applications.

INSTRUCTORalessandro Rizzi has been researching in the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images with particular attention to color vision mechanisms. He is Associate professor at the Dept. of Computer Science at University of Milano, teaching Fundamentals of Digital Imaging, Multimedia Video, and Human-Computer Interaction. He is one of the founders of the Italian Color Group and member of several program committees of conferences related to color and digital imaging.John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist, and IS&T 2005 Honorary Member.

Theory and Methods of Lightfield Photography

SC980Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Lightfield photography is based on capturing discrete representations of all light rays in a volume of 3D space. Since light rays are characterized with 2D position and 2D direction (relative to a plane of intersection), lightfield photography captures 4D data. In comparison, conventional photography captures 2D images. Multiplexing this 4D radiance data onto conventional 2D sensors demands sophisticated optics and imaging technology. Rending an image from the 4D lightfield is accomplished computationally based on creating 2D integral projections of the 4D radiance. Optical transformations can also be applied computationally, enabling effects such as computational focusing anywhere in space.This course presents a comprehensive development of lightfield photography, beginning with theoretical ray optics fundamentals and progressing through real-time GPU-based computational techniques. Although the material is mathematically rigorous, our goal is simplicity. Emphasizing fundamental underlying ideas leads to the development of surprisingly elegant analytical techniques. These techniques are in turn used to develop and characterize computational techniques, model lightfield cameras, and analyze resolution.The course also demonstrates practical approaches and engineering solutions. The course includes a hands-on demonstration of several working plenoptic cameras that implement different methods for radiance capture, including the micro-lens approach of Lippmann, the mask-enhanced “heterodyning” camera, the lens-prism camera, multispectral and polarization capture, and the plenoptic 2.0 camera. One section of the course is devoted specifically to the commercially available Lytro camera. Various computational techniques for processing captured data are demonstrated, including basic rendering, Ng’s Fourier slice algorithm, the heterodyned light-field approach for computational refocusing, glare reduction, super-resolution, artifact reduction, and others.

short Courses

24 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

LEARNING OUTCOMESThis course will enable you to:• formulate arbitrary lens systems in terms of matrix optics, i.e., to use

matrix operations to express ray propagation• formulate typical lightfield photography problems in terms of the

radiance in 4D ray space using ray propagation computations, enabling you to design and construct different plenoptic cameras both theoretically and as an engineering task

• classify plenoptic cameras into version 1.0 and 2.0 and analyze the reasons for the higher resolution of 2.0 cameras

• construct your own Plenoptic, 3D, HDR, multispectral or Superresolution cameras

• write GPU-based applications to perform lightfield rendering of the captured image in real time

• develop approaches to artifact reduction

INTENDED AUDIENCEThis course is intended for anyone interested in learning about lightfield photography. Prerequisites are basic familiarity with ray optics, image processing, linear algebra, and programming. Deeper involvement in one or several of those areas is a plus, but not required to understand the course.

INSTRUCTORtodor georgiev is a principal engineer at Qualcomm. With background in theoretical physics, he concentrates on applications of mathematical methods taken from physics to image processing. Todor was previously with Adobe Systems, where he authored the Photoshop Healing Brush (a tool on which Poisson image editing was based). He works on theoretical and practical ideas in optics and computational photography, including plenoptic cameras and radiance capture. He has a number of papers and patents in these and related areas.andrew Lumsdaine received his PhD degree in electrical engineering and computer science from the Massachusetts Institute of Technology in 1992. He is presently a professor of computer science at Indiana University, where he is also the director of the Center for Research in Extreme Scale Technologies. His research interests include computational science and engineering, parallel and distributed computing, programming languages, numerical analysis, and computational photography. He is a member of the IEEE, the IEEE Computer Society, the ACM, and SIAM.

Stereoscopic Display Application Issues

SC060Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

When correctly implemented, stereoscopic 3D displays can provide significant benefits in many areas, including endoscopy and other medical imaging, teleoperated vehicles and telemanipulators, CAD, molecular modeling, 3D computer graphics, 3D visualization, photo interpretation, video-based training, and entertainment. This course conveys a concrete understanding of basic principles and pitfalls that should be considered when setting up stereoscopic systems and producing stereoscopic content. The course will demonstrate a range of stereoscopic hardware and 3D imaging & display principles, outline the key issues in an ortho-stereoscopic video display setup, and show 3D video from a wide variety of applied stereoscopic imaging systems.

LEARNING OUTCOMESThis course will enable you to:• list critical human factors guidelines for stereoscopic display

configuration and implementation • calculate optimal camera focal length, separation, display size, and

viewing distance to achieve a desired level of depth acuity • examine comfort limits for focus/fixation mismatch and on-

screen parallax values as a function of focal length, separation, convergence, display size, and viewing-distance factors

• set up a large-screen stereo display system using AV equipment readily available at most conference sites, for 3D stills and for full-motion 3D video

• rank the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications

• explain common pitfalls in designing tests to compare 2D vs. 3D displays

• calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters

• design and set up an ortho-stereoscopic 3D imaging/display system • understand the projective geometry involved in stereoscopic

modeling • determine the problems, and the solutions, for converting

stereoscopic video across video standards such as NTSC and PAL • work with stereoscopic 3D video and stills -using analog and digital

methods of capture/filming, encoding, storage, format conversion, display, and publishing

• describe the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application

• understand existing and developing stereoscopic standards

INTENDED AUDIENCEThis course is designed for engineers, scientists, and program managers who are using, or considering using, stereoscopic 3D displays in their applications. The solid background in stereoscopic system fundamentals, along with many examples of advanced 3D display applications, makes this course highly useful both for those who are new to stereoscopic 3D and also for those who want to advance their current understanding and utilization of stereoscopic systems.

INSTRUCTORJohn Merritt is a 3D display systems consultant at The Merritt Group, Williamsburg, MA, USA with more than 25 years experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, off-road mobility, unmanned vehicles, night vision devices, photo interpretation, scientific visualization, and medical imaging.andrew Woods is a research engineer at Curtin University’s Centre for Marine Science and Technology in Perth, Western Australia. He has over 20 years of experience working on the design, application, and evaluation of stereoscopic technologies for industrial and entertainment applications.

3D Imaging

SC927Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course will begin by reviewing geometric image formation and mathematical concepts that are used to describe it, and then move to discuss algorithms for 3D model reconstruction.The problem of 3D model reconstruction is an inverse problem in which we need to infer 3D information based on incomplete (2D) observations. We will discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters, and the establishment of correspondence between views. We will discuss algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course will also introduce relevant 3D imaging software components available through the industry standard OpenCV library.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 25

LEARNING OUTCOMESThis course will enable you to:• describe fundamental concepts in 3D imaging• develop algorithms for 3D model reconstruction from 2D images• incorporate camera calibration into your reconstructions• classify the limitations of reconstruction techniques• use industry standard tools for developing 3D imaging applications

INTENDED AUDIENCEEngineers, researchers, and software developers, who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

INSTRUCTORgady agam is an Associate Professor of Computer Science at the Illinois Institute of Technology. He is the director of the Visual Computing Lab at IIT which focuses on imaging, geometric modeling, and graphics applications. He received his PhD degree from Ben-Gurion University in 1999.

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Joint Design of Optics and Image Processing for Imaging Systems

SC965Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 1:30 pm to 5:30 pm

For centuries, optical imaging system design centered on exploiting the laws of the physics of light and materials (glass, plastic, reflective metal, ...) to form high-quality (sharp, high-contrast, undistorted, ...) images that “looked good.” In the past several decades, the optical images produced by such systems have been ever more commonly sensed by digital detectors and the image imperfections corrected in software. The new era of electro-optical imaging offers a more fundamental revision to this paradigm, however: now the optics and image processing can be designed jointly to optimize an end-to-end digital merit function without regard to the traditional quality of the intermediate optical image. Many principles and guidelines from the optics-only era are counterproductive in the new era of electro-optical imaging and must be replaced by principles grounded on both the physics of photons and the information of bits.This short course will describe the theoretical and algorithmic foundations of new methods of jointly designing the optics and image processing of electro-optical imaging systems. The course will focus on the new concepts and approaches rather than commercial tools.

LEARNING OUTCOMESThis course will enable you to:• describe the basics of information theory• characterize electro-optical systems using linear systems theory• compute a predicted mean-squared error merit function• characterize the spatial statistics of sources• implement a Wiener filter• implement spatial convolution and digital filtering• make the distinction between traditional optics-only merit functions

and end-to-end digital merit functions• perform point-spread function engineering• become aware of the image processing implications of various

optical aberrations• describe wavefront coding and cubic phase plates• utilize the power of spherical coding• compare super-resolution algorithms and multi-aperture image

synthesizing systems• simulate the manufacturability of jointly designed imaging systems• evaluate new methods of electro-optical compensation

INTENDED AUDIENCEOptical designers familiar with system characterization (f#, depth of field, numerical aperture, point spread functions, modulation transfer functions, ...) and image processing experts familiar with basic operations (convolution, digital sharpening, information theory, ...).

INSTRUCTORdavid stork is Distinguished Research Scientist and Research Director at Rambus Labs, and a Fellow of the International Association for Pattern Recognition. He holds 40 US patents and has written nearly 200 technical publications including eight books or proceedings volumes such as Seeing the Light, Pattern Classification (2nd ed.) and HAL’s Legacy. He has given over 230 technical presentations on computer image analysis of art in 19 countries.

short Courses

26 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Image Capture

Camera Characterization and Camera Models New

SC1157Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies.This course provides an insight on camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level). The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurements devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.The course “SC1049 Benchmarking Image Quality of Still and Video Imaging Systems,” describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• build up and operate a camera characterization lab• master camera characterization protocols• understand camera models• define test plans• compare cameras as a system (end picture), subsystem (raw) or

component level (optics, sensor, ISP)• define data sets for benchmarks

INTENDED AUDIENCEImage scientists, camera designers.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and

benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutiquebased in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Benchmarking image Quality of still and Video imaging systems

SC1049Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd Monday 8:30 am to 5:30 pm

Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.The course “SC1157 Camera Characterization and Camera Models,” describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• summarize the overall image quality of a camera• identify defects that degrade image quality in natural images and

what component of the camera should/could be improved for better image quality

• evaluate the impact various output use cases have on overall image quality

• define subjective test plans and protocols• compare the image quality of a set of cameras• set up benchmarking protocols depending on use cases• build up a subjective image quality lab

INTENDED AUDIENCEImage scientists, engineers, or managers who wish to learn more about image quality and how to evaluate still and video cameras for various applications. A good understanding of imaging and how a camera works is assumed.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 27

of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutique based in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Theory and Methods of Lightfield Photography

SC980Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Lightfield photography is based on capturing discrete representations of all light rays in a volume of 3D space. Since light rays are characterized with 2D position and 2D direction (relative to a plane of intersection), lightfield photography captures 4D data. In comparison, conventional photography captures 2D images. Multiplexing this 4D radiance data onto conventional 2D sensors demands sophisticated optics and imaging technology. Rending an image from the 4D lightfield is accomplished computationally based on creating 2D integral projections of the 4D radiance. Optical transformations can also be applied computationally, enabling effects such as computational focusing anywhere in space.This course presents a comprehensive development of lightfield photography, beginning with theoretical ray optics fundamentals and progressing through real-time GPU-based computational techniques. Although the material is mathematically rigorous, our goal is simplicity. Emphasizing fundamental underlying ideas leads to the development of surprisingly elegant analytical techniques. These techniques are in turn used to develop and characterize computational techniques, model lightfield cameras, and analyze resolution.The course also demonstrates practical approaches and engineering solutions. The course includes a hands-on demonstration of several working plenoptic cameras that implement different methods for radiance capture, including the micro-lens approach of Lippmann, the mask-enhanced “heterodyning” camera, the lens-prism camera, multispectral and polarization capture, and the plenoptic 2.0 camera. One section of the course is devoted specifically to the commercially available Lytro camera. Various computational techniques for processing captured data are demonstrated, including basic rendering, Ng’s Fourier slice algorithm, the heterodyned light-field approach for computational refocusing, glare reduction, super-resolution, artifact reduction, and others.

LEARNING OUTCOMESThis course will enable you to:• formulate arbitrary lens systems in terms of matrix optics, i.e., to use

matrix operations to express ray propagation• formulate typical lightfield photography problems in terms of the

radiance in 4D ray space using ray propagation computations, enabling you to design and construct different plenoptic cameras both theoretically and as an engineering task

• classify plenoptic cameras into version 1.0 and 2.0 and analyze the reasons for the higher resolution of 2.0 cameras

• construct your own Plenoptic, 3D, HDR, multispectral or Superresolution cameras

• write GPU-based applications to perform lightfield rendering of the captured image in real time

• develop approaches to artifact reduction

INTENDED AUDIENCEThis course is intended for anyone interested in learning about lightfield photography. Prerequisites are basic familiarity with ray optics, image processing, linear algebra, and programming. Deeper involvement in one or several of those areas is a plus, but not required to understand the course.

INSTRUCTORtodor georgiev is a principal engineer at Qualcomm. With background in theoretical physics, he concentrates on applications of mathematical methods taken from physics to image processing. Todor was previously with Adobe Systems, where he authored the Photoshop Healing Brush (a tool on which Poisson image editing was based). He works on theoretical and practical ideas in optics and computational photography, including plenoptic cameras and radiance capture. He has a number of papers and patents in these and related areas.andrew Lumsdaine received his PhD degree in electrical engineering and computer science from the Massachusetts Institute of Technology in 1992. He is presently a professor of computer science at Indiana University, where he is also the director of the Center for Research in Extreme Scale Technologies. His research interests include computational science and engineering, parallel and distributed computing, programming languages, numerical analysis, and computational photography. He is a member of the IEEE, the IEEE Computer Society, the ACM, and SIAM.

High Dynamic Range Imaging: Sensors and Architectures

SC967Course Level: intermediateCeU: 0.65 $570 Members | $680 Non-Members Usd sunday 8:30 am to 5:30 pm

This course provides attendees with an intermediate knowledge of high dynamic range image sensors and techniques for industrial and non-industrial applications. The course describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets. The course follows a mathematic approach to define the amount of information that can be extracted from the image for each of the methods described. Some methods for automatic control of exposure and dynamic range of image sensors and other issues like color and glare will be introduced.

LEARNING OUTCOMESThis course will enable you to:• describe various approaches to achieve high dynamic range imaging• predict the behavior of a given sensor or architecture on a scene• specify the sensor or system requirements for a high dynamic range

application• classify a high dynamic range application into one of several

standard types

INTENDED AUDIENCEThis material is intended for anyone who needs to learn more about quantitative side of high dynamic range imaging. Optical engineers, electronic engineers and scientists will find useful information for their next high dynamic range application.

INSTRUCTORarnaud darmont is owner and CEO of Aphesa, a company founded in 2008 and specialized in custom camera developments, image sensor consulting, the EMVA1288 standard and camera benchmarking. He holds a degree in Electronic Engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for over 7 years in the field of CMOS image sensors and high dynamic range imaging.

COURSE PRICE INCLUDES the text High Dynamic Range Imaging: Sen-sors and Architectures (SPIE Press, 2012) by Arnaud Darmont.

short Courses

28 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

HDR Imaging in Cameras, Displays and Human Vision

SC1097Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 8:30 am to 12:30 pm

High-dynamic range (HDR) imaging is a significant improvement over conventional imaging. After a description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. In particular, the course presents measurements about the limits of accurate camera acquisition and the usable range of light for displays of our vision system. Regarding our vision system, the course discusses the role of accurate vs. non-accurate luminance recording for the final appearance of a scene, presenting the quality and the characteristics of visual information actually available on the retina. It ends with a discussion of the principles of tone rendering and the role of spatial comparison.

LEARNING OUTCOMESThis course will enable you to:• explore the history of HDR imaging• describe dynamic range and quantization: the ‘salame’ metaphor• compare single and multiple-exposure for scene capture• measure optical limits in acquisition and visualization• discover relationship between HDR range and scene dependency ;

the effect of glare• explore the limits of our vision system on HDR• calculate retinal luminance• put in relationship the HDR images and the visual appearance• identify tone-rendering problems and spatial methods• verify the changes in color spaces due to dynamic range expansion

INTENDED AUDIENCEColor scientists, software and hardware engineers, photographers, cinematographers, production specialists, and students interested in using HDR images in real applications.

INSTRUCTORalessandro Rizzi has been researching in the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images with particular attention to color vision mechanisms. He is Associate professor at the Dept. of Computer Science at University of Milano, teaching Fundamentals of Digital Imaging, Multimedia Video, and Human-Computer Interaction. He is one of the founders of the Italian Color Group and member of several program committees of conferences related to color and digital imaging.John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist, and IS&T 2005 Honorary Member.

Joint Design of Optics and Image Processing for Imaging Systems

SC965Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 1:30 pm to 5:30 pm

For centuries, optical imaging system design centered on exploiting the laws of the physics of light and materials (glass, plastic, reflective metal, ...) to form high-quality (sharp, high-contrast, undistorted, ...) images that “looked good.” In the past several decades, the optical images produced by such systems have been ever more commonly sensed by digital detectors and the image imperfections corrected in software. The new era of electro-optical imaging offers a more fundamental revision to this paradigm, however: now the optics and image processing can be designed jointly to optimize an end-to-end digital merit function without regard to the traditional quality of the intermediate optical image. Many principles and guidelines from the optics-only era are counterproductive in the new era of electro-optical imaging and must be replaced by principles grounded on both the physics of photons and the information of bits.This short course will describe the theoretical and algorithmic foundations of new methods of jointly designing the optics and image processing of electro-optical imaging systems. The course will focus on the new concepts and approaches rather than commercial tools.

LEARNING OUTCOMESThis course will enable you to:• describe the basics of information theory• characterize electro-optical systems using linear systems theory• compute a predicted mean-squared error merit function• characterize the spatial statistics of sources• implement a Wiener filter• implement spatial convolution and digital filtering• make the distinction between traditional optics-only merit functions

and end-to-end digital merit functions• perform point-spread function engineering• become aware of the image processing implications of various

optical aberrations• describe wavefront coding and cubic phase plates• utilize the power of spherical coding• compare super-resolution algorithms and multi-aperture image

synthesizing systems• simulate the manufacturability of jointly designed imaging systems• evaluate new methods of electro-optical compensation

INTENDED AUDIENCEOptical designers familiar with system characterization (f#, depth of field, numerical aperture, point spread functions, modulation transfer functions, ...) and image processing experts familiar with basic operations (convolution, digital sharpening, information theory, ...).

INSTRUCTORdavid stork is Distinguished Research Scientist and Research Director at Rambus Labs, and a Fellow of the International Association for Pattern Recognition. He holds 40 US patents and has written nearly 200 technical publications including eight books or proceedings volumes such as Seeing the Light, Pattern Classification (2nd ed.) and HAL’s Legacy. He has given over 230 technical presentations on computer image analysis of art in 19 countries.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 29

Digital Camera and Scanner Performance Evaluation: Standards and Measurement

SC807Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

This is an updated course on imaging performance measurement methods for digital image capture devices and systems. We introduce several ISO measurement protocols for camera resolution, tone-transfer, noise, etc. We focus on the underlying sources of variability in system performance, measurement error, and how to manage this variability in working environments. The propagation of measurement variability will be described for several emerging standard methods for; image texture, distortion, color shading, flare and chromatic aberration. Using actual measurements we demonstrate how standards can be adapted to evaluate capture devices ranging from cell phone cameras to scientific detectors. New this year, we will be discussing the use of raw files to investigate intrinsic signal and noise characteristics of the image-capture path.

LEARNING OUTCOMESThis course will enable you to:• appreciate the difference between imaging performance and image

quality• interpret and apply the different flavors of each ISO performance

method• identify sources of system variability, and understand resulting

measurement error• distill information-rich ISO metrics into single measures for quality

assurance• adapt standard methods for use in factory testing• select software elements (with Matlab examples) for performance

evaluation programs• use raw images to investigate intrinsic/limiting imaging perfromance

INTENDED AUDIENCEAlthough technical in content, this course is intended for a wide audience: image scientists, quality engineers, and others evaluating digital camera and scanner performance. No background in imaging performance (MTF, etc.) evaluation will be assumed, although the course will provide previous attendees with an update and further insight for implementation. Detailed knowledge of Matlab is not needed, but exposure to similar software environments will be helpful.

INSTRUCTORPeter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox and Eastman Kodak. A frequent speaker at technical conferences, he has contributed to several imaging standards. He has taught several imaging courses: at Kodak, SPIE, and IS&T technical conferences, and at the Center for Imaging Science, RIT.donald Williams , founder of Image Science Associates, was with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices, and imaging fidelity issues. He co-leads the TC42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) scanner dynamic range (ISO 21550) and is the editor for the second edition to digital camera resolution (ISO 12233).

Recent Trends in Imaging Devices

SC1048Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 1:30 pm to 5:30 pm

In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have been dramatically diffused. The increasing of their computational performances combined with an higher

storage capability allowed to design and implement advanced imaging systems that can automatically process visual data with the purpose of understanding the content of the observed scenes. In the next years, we will be conquered by wearable visual devices acquiring, streaming and logging video of our daily life. This new exciting imaging domain, in which the scene is observed from a first person point of view, poses new challenges to the research community, as well as gives the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition and video summarization, have to be re-defined and re-designed by considering the emerging wearable imaging domain.In the first part of this course we will review the main algorithms involved in the single-sensor imaging devices pipeline describing also some advanced applications. In the second part of the course we will give an overview of the recent trends of imaging devices considering the wearable domain. Challenges and applications will be discussed considering the state-of-the-art literature.

LEARNING OUTCOMESThis course will enable you to:• describe operating single-sensor imaging systems for commercial

and scientific imaging applications• explain how imaging data are acquired and processed (demosaicing,

color calibration, etc.)• list specifications and requirements to select a specific algorithm for

your imaging application• recognize performance differences among imaging pipeline

technologies• become familiar with current and future imaging technologies,

challenges and applications

INTENDED AUDIENCEThis course is intended for those with a general computing background, and is interested in the topic of image processing and computer vision. Students, researchers, and practicing engineers should all be able to benefit from the general overview of the field and the introduction of the most recent advances of the technology.

INSTRUCTORsebastiano Battiato received his Ph.D. in computer science and applied mathematics in 1999, and led the “Imaging” team at STMicroelectronics in Catania through 2003. He joined the Department of Mathematics and Computer Science at the University of Catania as assistant professor in 2004 and became associate professor in 2011. His research interests include image enhancement and processing, image coding, camera imaging technology and multimedia forensics. He has published more than 90 papers in international journals, conference proceedings and book chapters. He is a co-inventor of about 15 international patents, reviewer for several international journals, and has been regularly a member of numerous international conference committees. He is director (and co-founder) of the International Computer Vision Summer School (ICVSS), Sicily, Italy. He is a senior member of the IEEE.giovanni Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in computer science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008, as a Contract Researcher. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited four volumes and coauthored more than 60 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer and on the programme committee for major international journals and international conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS).

short Courses

30 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Image Quality and Evaluation of Cameras In Mobile Devices

SC1058Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.This short course provides an overview of “light in to byte out” issues associated with digital and mobile imaging cameras. The course covers, optics, sensors, image processing, and sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the “visual noise” algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.

LEARNING OUTCOMESThis course will enable you to:• describe pixel technology and color filtering• describe illumination, photons, sensor and camera radiometry• select a sensor for a given application• describe and measure sensor performance metrics• describe and understand the optics of digital and mobile imaging

systems• examine the difficulties in minimizing sensor sizes• assess the need for per unit calibrations in digital still cameras and

mobile imaging devices• learn about noise, its sources, and methods of managing it• make noise and resolution measurements based on international

standards o EMVA 1288 o ISO 14524 (OECF)/ISO 15739 (Noise) o Visual Noise o ISO 12233 (Resolution)• assess influence of the image pipeline on noise• utilize today’s algorithms to reduce noise in images• measure noise based on human perception• optimize image quality by balancing noise reduction and resolution• compare hardware tradeoffs, noise reduction algorithms, and

settings for optimal image quality

INTENDED AUDIENCEAll people evaluating the image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

INSTRUCTORKevin Matherson is a senior image scientist in the research and development lab of Hewlett-Packard’s Imaging and Printing Group and has worked in the field of digital imaging since 1985. He joined Hewlett Packard in 1996 and has participated in the development of all HP digital and mobile imaging cameras produced since that time. His primary research interests focus on noise characterization, optical system analysis, and the optimization of camera image quality. Dr. Matherson currently leads the camera characterization laboratory in Fort Collins and holds Masters and PhD degrees in Optical Sciences from the University of Arizona.

Uwe artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications• Design of databases and subjective procedures for metric

development and testing• Metric performance comparisons, selection, and general use and

abuse• Embedded metric performance, e.g., for rate-distortion optimized

compression or restoration• Metrics for specific distortions, e.g., blocking and blurring, and for

specific attributes, e.g., contrast, roughness, and glossiness• Multimodal applications

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 31

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

short Courses

32 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Introduction to Digital Color Imaging New

SC1154Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via a overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 33

LEARNING OUTCOMESThis course will enable you to:• explain how color is perceived starting from a physical stimulus

and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast

• describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system

• list basic processing functions in a digital color imaging system, and schematically represent a system from input to output for common devices such as a digital cameras, displays, and color printers

• describe why color management is required and how it is performed• explain the role of color appearance transforms in image color

manipulations for gamut mapping and enhancement• explain how interactions between color and spatial dimensions

are commonly utilized in designing color imaging systems and algorithms

• cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization

INTENDED AUDIENCEThe short course is intended for engineers, scientists, students, and managers interested in acquiring a broad- system wide view of digital color imaging systems. Prior familiarity with basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

INSTRUCTORgaurav sharma has over two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a Professor at the University of Rochester in the Departments of Electrical and Computer Engineering and Computer Science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 49 issued patents and has authored over a 150 peer-reviewed publications. He is the editor of the “Digital Color Imaging Handbook” published by CRC Press and currently serves as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging. Dr. Sharma is a fellow of IEEE, SPIE, and IS&T.

Computer Vision

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.

This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

short Courses

34 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Camera Characterization and Camera Models New

SC1157Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies.This course provides an insight on camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level). The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurements devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.The course “SC1049 Benchmarking Image Quality of Still and Video Imaging Systems,” describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• build up and operate a camera characterization lab• master camera characterization protocols• understand camera models• define test plans• compare cameras as a system (end picture), subsystem (raw) or

component level (optics, sensor, ISP)• define data sets for benchmarks

INTENDED AUDIENCEImage scientists, camera designers.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.

Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutiquebased in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Benchmarking image Quality of still and Video imaging systems

SC1049Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd Monday 8:30 am to 5:30 pm

Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.The course “SC1157 Camera Characterization and Camera Models,” describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• summarize the overall image quality of a camera• identify defects that degrade image quality in natural images and

what component of the camera should/could be improved for better image quality

• evaluate the impact various output use cases have on overall image quality

• define subjective test plans and protocols• compare the image quality of a set of cameras• set up benchmarking protocols depending on use cases• build up a subjective image quality lab

INTENDED AUDIENCEImage scientists, engineers, or managers who wish to learn more about image quality and how to evaluate still and video cameras for various applications. A good understanding of imaging and how a camera works is assumed.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 35

Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutique based in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications• Design of databases and subjective procedures for metric

development and testing• Metric performance comparisons, selection, and general use and

abuse• Embedded metric performance, e.g., for rate-distortion optimized

compression or restoration• Metrics for specific distortions, e.g., blocking and blurring, and for

specific attributes, e.g., contrast, roughness, and glossiness• Multimodal applications

short Courses

36 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

Digital Camera and Scanner Performance Evaluation: Standards and Measurement

SC807Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

This is an updated course on imaging performance measurement methods for digital image capture devices and systems. We introduce several ISO measurement protocols for camera resolution, tone-transfer, noise, etc. We focus on the underlying sources of variability in system performance, measurement error, and how to manage this variability in working environments. The propagation of measurement variability will be described for several emerging standard methods for; image texture, distortion, color shading, flare and chromatic aberration. Using actual measurements we demonstrate how standards can be adapted to evaluate capture devices ranging from cell phone cameras to scientific detectors. New this year, we will be discussing the use of raw files to investigate intrinsic signal and noise characteristics of the image-capture path.

LEARNING OUTCOMESThis course will enable you to:• appreciate the difference between imaging performance and image

quality• interpret and apply the different flavors of each ISO performance

method• identify sources of system variability, and understand resulting

measurement error• distill information-rich ISO metrics into single measures for quality

assurance• adapt standard methods for use in factory testing• select software elements (with Matlab examples) for performance

evaluation programs• use raw images to investigate intrinsic/limiting imaging perfromance

INTENDED AUDIENCEAlthough technical in content, this course is intended for a wide audience: image scientists, quality engineers, and others evaluating digital camera and scanner performance. No background in imaging performance (MTF, etc.) evaluation will be assumed, although the course will provide previous attendees with an update and further insight for implementation. Detailed knowledge of Matlab is not needed, but exposure to similar software environments will be helpful.

INSTRUCTORPeter Burns is a consultant working in imaging system evaluation, modeling, and image processing. Previously he worked for Carestream Health, Xerox and Eastman Kodak. A frequent speaker at technical conferences, he has contributed to several imaging standards. He has taught several imaging courses: at Kodak, SPIE, and IS&T technical conferences, and at the Center for Imaging Science, RIT.donald Williams , founder of Image Science Associates, was with Kodak Research Laboratories. His work focuses on quantitative signal and noise performance metrics for digital capture imaging devices, and imaging fidelity issues. He co-leads the TC42 standardization efforts on digital print and film scanner resolution (ISO 16067-1, ISO 16067-2) scanner dynamic range (ISO 21550) and is the editor for the second edition to digital camera resolution (ISO 12233).

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 37

Joint Design of Optics and Image Processing for Imaging Systems

SC965Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 1:30 pm to 5:30 pm

For centuries, optical imaging system design centered on exploiting the laws of the physics of light and materials (glass, plastic, reflective metal, ...) to form high-quality (sharp, high-contrast, undistorted, ...) images that “looked good.” In the past several decades, the optical images produced by such systems have been ever more commonly sensed by digital detectors and the image imperfections corrected in software. The new era of electro-optical imaging offers a more fundamental revision to this paradigm, however: now the optics and image processing can be designed jointly to optimize an end-to-end digital merit function without regard to the traditional quality of the intermediate optical image. Many principles and guidelines from the optics-only era are counterproductive in the new era of electro-optical imaging and must be replaced by principles grounded on both the physics of photons and the information of bits.This short course will describe the theoretical and algorithmic foundations of new methods of jointly designing the optics and image processing of electro-optical imaging systems. The course will focus on the new concepts and approaches rather than commercial tools.

LEARNING OUTCOMESThis course will enable you to:• describe the basics of information theory• characterize electro-optical systems using linear systems theory• compute a predicted mean-squared error merit function• characterize the spatial statistics of sources• implement a Wiener filter• implement spatial convolution and digital filtering• make the distinction between traditional optics-only merit functions

and end-to-end digital merit functions• perform point-spread function engineering• become aware of the image processing implications of various

optical aberrations• describe wavefront coding and cubic phase plates• utilize the power of spherical coding• compare super-resolution algorithms and multi-aperture image

synthesizing systems• simulate the manufacturability of jointly designed imaging systems• evaluate new methods of electro-optical compensation

INTENDED AUDIENCEOptical designers familiar with system characterization (f#, depth of field, numerical aperture, point spread functions, modulation transfer functions, ...) and image processing experts familiar with basic operations (convolution, digital sharpening, information theory, ...).

INSTRUCTORdavid stork is Distinguished Research Scientist and Research Director at Rambus Labs, and a Fellow of the International Association for Pattern Recognition. He holds 40 US patents and has written nearly 200 technical publications including eight books or proceedings volumes such as Seeing the Light, Pattern Classification (2nd ed.) and HAL’s Legacy. He has given over 230 technical presentations on computer image analysis of art in 19 countries.

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Media Processing and Communication

Recent Trends in Imaging Devices

SC1048Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 1:30 pm to 5:30 pm

In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have been dramatically diffused. The increasing of their computational performances combined with an higher storage capability allowed to design and implement advanced imaging systems that can automatically process visual data with the purpose of understanding the content of the observed scenes.

short Courses

38 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

In the next years, we will be conquered by wearable visual devices acquiring, streaming and logging video of our daily life. This new exciting imaging domain, in which the scene is observed from a first person point of view, poses new challenges to the research community, as well as gives the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition and video summarization, have to be re-defined and re-designed by considering the emerging wearable imaging domain.In the first part of this course we will review the main algorithms involved in the single-sensor imaging devices pipeline describing also some advanced applications. In the second part of the course we will give an overview of the recent trends of imaging devices considering the wearable domain. Challenges and applications will be discussed considering the state-of-the-art literature.

LEARNING OUTCOMESThis course will enable you to:• describe operating single-sensor imaging systems for commercial

and scientific imaging applications• explain how imaging data are acquired and processed (demosaicing,

color calibration, etc.)• list specifications and requirements to select a specific algorithm for

your imaging application• recognize performance differences among imaging pipeline

technologies• become familiar with current and future imaging technologies,

challenges and applications

INTENDED AUDIENCEThis course is intended for those with a general computing background, and is interested in the topic of image processing and computer vision. Students, researchers, and practicing engineers should all be able to benefit from the general overview of the field and the introduction of the most recent advances of the technology.

INSTRUCTORsebastiano Battiato received his Ph.D. in computer science and applied mathematics in 1999, and led the “Imaging” team at STMicroelectronics in Catania through 2003. He joined the Department of Mathematics and Computer Science at the University of Catania as assistant professor in 2004 and became associate professor in 2011. His research interests include image enhancement and processing, image coding, camera imaging technology and multimedia forensics. He has published more than 90 papers in international journals, conference proceedings and book chapters. He is a co-inventor of about 15 international patents, reviewer for several international journals, and has been regularly a member of numerous international conference committees. He is director (and co-founder) of the International Computer Vision Summer School (ICVSS), Sicily, Italy. He is a senior member of the IEEE.giovanni Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in computer science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008, as a Contract Researcher. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited four volumes and coauthored more than 60 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer and on the programme committee for major international journals and international conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS).

Image Quality and Evaluation of Cameras In Mobile Devices

SC1058Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.This short course provides an overview of “light in to byte out” issues associated with digital and mobile imaging cameras. The course covers, optics, sensors, image processing, and sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the “visual noise” algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.

LEARNING OUTCOMESThis course will enable you to:• describe pixel technology and color filtering• describe illumination, photons, sensor and camera radiometry• select a sensor for a given application• describe and measure sensor performance metrics• describe and understand the optics of digital and mobile imaging

systems• examine the difficulties in minimizing sensor sizes• assess the need for per unit calibrations in digital still cameras and

mobile imaging devices• learn about noise, its sources, and methods of managing it• make noise and resolution measurements based on international

standards o EMVA 1288 o ISO 14524 (OECF)/ISO 15739 (Noise) o Visual Noise o ISO 12233 (Resolution)• assess influence of the image pipeline on noise• utilize today’s algorithms to reduce noise in images• measure noise based on human perception• optimize image quality by balancing noise reduction and resolution• compare hardware tradeoffs, noise reduction algorithms, and

settings for optimal image quality

INTENDED AUDIENCEAll people evaluating the image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

INSTRUCTORKevin Matherson is a senior image scientist in the research and development lab of Hewlett-Packard’s Imaging and Printing Group and has worked in the field of digital imaging since 1985. He joined Hewlett Packard in 1996 and has participated in the development of all HP digital and mobile imaging cameras produced since that time. His primary research interests focus on noise characterization, optical system analysis, and the optimization of camera image quality. Dr. Matherson currently leads the camera characterization laboratory in Fort Collins and holds Masters and PhD degrees in Optical Sciences from the University of Arizona.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 39

Uwe artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

High Dynamic Range Imaging: Sensors and Architectures

SC967Course Level: intermediateCeU: 0.65 $570 Members | $680 Non-Members Usd sunday 8:30 am to 5:30 pm

This course provides attendees with an intermediate knowledge of high dynamic range image sensors and techniques for industrial and non-industrial applications. The course describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets. The course follows a mathematic approach to define the amount of information that can be extracted from the image for each of the methods described. Some methods for automatic control of exposure and dynamic range of image sensors and other issues like color and glare will be introduced.

LEARNING OUTCOMESThis course will enable you to:• describe various approaches to achieve high dynamic range

imaging• predict the behavior of a given sensor or architecture on a scene• specify the sensor or system requirements for a high dynamic range

application• classify a high dynamic range application into one of several

standard types

INTENDED AUDIENCEThis material is intended for anyone who needs to learn more about quantitative side of high dynamic range imaging. Optical engineers, electronic engineers and scientists will find useful information for their next high dynamic range application.

INSTRUCTORarnaud darmont is owner and CEO of Aphesa, a company founded in 2008 and specialized in custom camera developments, image sensor consulting, the EMVA1288 standard and camera benchmarking. He holds a degree in Electronic Engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for over 7 years in the field of CMOS image sensors and high dynamic range imaging.

COURSE PRICE INCLUDES the text High Dynamic Range Imaging: Sen-sors and Architectures (SPIE Press, 2012) by Arnaud Darmont.

HDR Imaging in Cameras, Displays and Human Vision

SC1097Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 8:30 am to 12:30 pm

High-dynamic range (HDR) imaging is a significant improvement over conventional imaging. After a description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. In particular, the course presents measurements about the limits of accurate camera acquisition and the usable range of light for displays of our vision system. Regarding our vision system, the course discusses the role of accurate vs. non-accurate luminance recording for the final appearance of a scene, presenting the quality and the characteristics of visual information actually available on the retina. It ends with a discussion of the principles of tone rendering and the role of spatial comparison.

LEARNING OUTCOMESThis course will enable you to:• explore the history of HDR imaging• describe dynamic range and quantization: the ‘salame’ metaphor• compare single and multiple-exposure for scene capture• measure optical limits in acquisition and visualization• discover relationship between HDR range and scene dependency ;

the effect of glare• explore the limits of our vision system on HDR• calculate retinal luminance• put in relationship the HDR images and the visual appearance• identify tone-rendering problems and spatial methods• verify the changes in color spaces due to dynamic range expansion

INTENDED AUDIENCEColor scientists, software and hardware engineers, photographers, cinematographers, production specialists, and students interested in using HDR images in real applications.

INSTRUCTORalessandro Rizzi has been researching in the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images with particular attention to color vision mechanisms. He is Associate professor at the Dept. of Computer Science at University of Milano, teaching Fundamentals of Digital Imaging, Multimedia Video, and Human-Computer Interaction. He is one of the founders of the Italian Color Group and member of several program committees of conferences related to color and digital imaging.John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist, and IS&T 2005 Honorary Member.

short Courses

40 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer and

commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 41

Camera Characterization and Camera Models New

SC1157Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies.This course provides an insight on camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level). The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurements devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.The course “SC1049 Benchmarking Image Quality of Still and Video Imaging Systems,” describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• build up and operate a camera characterization lab• master camera characterization protocols• understand camera models• define test plans• compare cameras as a system (end picture), subsystem (raw) or

component level (optics, sensor, ISP)• define data sets for benchmarks

INTENDED AUDIENCEImage scientists, camera designers.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.

Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutiquebased in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Benchmarking image Quality of still and Video imaging systems

SC1049Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd Monday 8:30 am to 5:30 pm

Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.The course “SC1157 Camera Characterization and Camera Models,” describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• summarize the overall image quality of a camera• identify defects that degrade image quality in natural images and

what component of the camera should/could be improved for better image quality

• evaluate the impact various output use cases have on overall image quality

• define subjective test plans and protocols• compare the image quality of a set of cameras• set up benchmarking protocols depending on use cases• build up a subjective image quality lab

INTENDED AUDIENCEImage scientists, engineers, or managers who wish to learn more about image quality and how to evaluate still and video cameras for various applications. A good understanding of imaging and how a camera works is assumed.

short Courses

42 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutique based in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications

• Design of databases and subjective procedures for metric development and testing

• Metric performance comparisons, selection, and general use and abuse

• Embedded metric performance, e.g., for rate-distortion optimized compression or restoration

• Metrics for specific distortions, e.g., blocking and blurring, and for specific attributes, e.g., contrast, roughness, and glossiness

• Multimodal applications

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 43

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

Stereoscopic Display Application Issues

SC060Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

When correctly implemented, stereoscopic 3D displays can provide significant benefits in many areas, including endoscopy and other medical imaging, teleoperated vehicles and telemanipulators, CAD, molecular modeling, 3D computer graphics, 3D visualization, photo interpretation, video-based training, and entertainment. This course conveys a concrete understanding of basic principles and pitfalls that should be considered when setting up stereoscopic systems and producing stereoscopic content. The course will demonstrate a range of stereoscopic hardware and 3D imaging & display principles, outline the key issues in an ortho-stereoscopic video display setup, and show 3D video from a wide variety of applied stereoscopic imaging systems.

LEARNING OUTCOMESThis course will enable you to:• list critical human factors guidelines for stereoscopic display

configuration and implementation • calculate optimal camera focal length, separation, display size, and

viewing distance to achieve a desired level of depth acuity • examine comfort limits for focus/fixation mismatch and on-

screen parallax values as a function of focal length, separation, convergence, display size, and viewing-distance factors

• set up a large-screen stereo display system using AV equipment readily available at most conference sites, for 3D stills and for full-motion 3D video

• rank the often-overlooked side-benefits of stereoscopic displays that should be included in a cost/benefit analysis for proposed 3D applications

• explain common pitfalls in designing tests to compare 2D vs. 3D displays

• calculate and demonstrate the distortions in perceived 3D space due to camera and display parameters

• design and set up an ortho-stereoscopic 3D imaging/display system • understand the projective geometry involved in stereoscopic

modeling • determine the problems, and the solutions, for converting

stereoscopic video across video standards such as NTSC and PAL • work with stereoscopic 3D video and stills -using analog and digital

methods of capture/filming, encoding, storage, format conversion, display, and publishing

• describe the trade-offs among currently available stereoscopic display system technologies and determine which will best match a particular application

• understand existing and developing stereoscopic standards

INTENDED AUDIENCEThis course is designed for engineers, scientists, and program managers who are using, or considering using, stereoscopic 3D displays in their applications. The solid background in stereoscopic system fundamentals, along with many examples of advanced 3D display applications, makes this course highly useful both for those who are new to stereoscopic 3D and also for those who want to advance their current understanding and utilization of stereoscopic systems.

INSTRUCTORJohn Merritt is a 3D display systems consultant at The Merritt Group, Williamsburg, MA, USA with more than 25 years experience in the design and human-factors evaluation of stereoscopic video displays for telepresence and telerobotics, off-road mobility, unmanned vehicles, night vision devices, photo interpretation, scientific visualization, and medical imaging.andrew Woods is a research engineer at Curtin University’s Centre for Marine Science and Technology in Perth, Western Australia. He has over 20 years of experience working on the design, application, and evaluation of stereoscopic technologies for industrial and entertainment applications.

3D Imaging

SC927Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 8:30 am to 12:30 pm

The purpose of this course is to introduce algorithms for 3D structure inference from 2D images. In many applications, inferring 3D structure from 2D images can provide crucial sensing information. The course will begin by reviewing geometric image formation and mathematical concepts that are used to describe it, and then move to discuss algorithms for 3D model reconstruction.

short Courses

44 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

The problem of 3D model reconstruction is an inverse problem in which we need to infer 3D information based on incomplete (2D) observations. We will discuss reconstruction algorithms which utilize information from multiple views. Reconstruction requires the knowledge of some intrinsic and extrinsic camera parameters, and the establishment of correspondence between views. We will discuss algorithms for determining camera parameters (camera calibration) and for obtaining correspondence using epipolar constraints between views. The course will also introduce relevant 3D imaging software components available through the industry standard OpenCV library.

LEARNING OUTCOMESThis course will enable you to:• describe fundamental concepts in 3D imaging• develop algorithms for 3D model reconstruction from 2D images• incorporate camera calibration into your reconstructions• classify the limitations of reconstruction techniques• use industry standard tools for developing 3D imaging applications

INTENDED AUDIENCEEngineers, researchers, and software developers, who develop imaging applications and/or use camera sensors for inspection, control, and analysis. The course assumes basic working knowledge concerning matrices and vectors.

INSTRUCTORgady agam is an Associate Professor of Computer Science at the Illinois Institute of Technology. He is the director of the Visual Computing Lab at IIT which focuses on imaging, geometric modeling, and graphics applications. He received his PhD degree from Ben-Gurion University in 1999.

Mobile Imaging

Image Quality and Evaluation of Cameras In Mobile Devices

SC1058Course Level: intermediateCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Digital and mobile imaging camera system performance is determined by a combination of sensor characteristics, lens characteristics, and image-processing algorithms. As pixel size decreases, sensitivity decreases and noise increases, requiring a more sophisticated noise-reduction algorithm to obtain good image quality. Furthermore, small pixels require high-resolution optics with low chromatic aberration and very small blur circles. Ultimately, there is a tradeoff between noise, resolution, sharpness, and the quality of an image.This short course provides an overview of “light in to byte out” issues associated with digital and mobile imaging cameras. The course covers, optics, sensors, image processing, and sources of noise in these cameras, algorithms to reduce it, and different methods of characterization. Although noise is typically measured as a standard deviation in a patch with uniform color, it does not always accurately represent human perception. Based on the “visual noise” algorithm described in ISO 15739, an improved approach for measuring noise as an image quality aspect will be demonstrated. The course shows a way to optimize image quality by balancing the tradeoff between noise and resolution. All methods discussed will use images as examples.

LEARNING OUTCOMESThis course will enable you to:• describe pixel technology and color filtering• describe illumination, photons, sensor and camera radiometry• select a sensor for a given application• describe and measure sensor performance metrics• describe and understand the optics of digital and mobile imaging

systems• examine the difficulties in minimizing sensor sizes• assess the need for per unit calibrations in digital still cameras and

mobile imaging devices• learn about noise, its sources, and methods of managing it• make noise and resolution measurements based on international

standards o EMVA 1288 o ISO 14524 (OECF)/ISO 15739 (Noise) o Visual Noise o ISO 12233 (Resolution)• assess influence of the image pipeline on noise• utilize today’s algorithms to reduce noise in images• measure noise based on human perception• optimize image quality by balancing noise reduction and resolution• compare hardware tradeoffs, noise reduction algorithms, and

settings for optimal image quality

INTENDED AUDIENCEAll people evaluating the image quality of digital cameras, mobile cameras, and scanners would benefit from participation. Technical staff of manufacturers, managers of digital imaging projects, as well as journalists and students studying image technology are among the intended audience.

INSTRUCTORKevin Matherson is a senior image scientist in the research and development lab of Hewlett-Packard’s Imaging and Printing Group and has worked in the field of digital imaging since 1985. He joined Hewlett Packard in 1996 and has participated in the development of all HP digital and mobile imaging cameras produced since that time. His primary research interests focus on noise characterization, optical system analysis, and the optimization of camera image quality. Dr. Matherson currently leads the camera characterization laboratory in Fort Collins and holds Masters and PhD degrees in Optical Sciences from the University of Arizona.Uwe artmann studied Photo Technology at the University of Applied Sciences in Cologne following an apprenticeship as a photographer, and finished with the German ‘Diploma Engineer’. He is now CTO at Image Engineering, an independent test lab for imaging devices and manufacturer of all kinds of test equipment for these devices. His special interest is the influence of noise reduction on image quality and MTF measurement in general.

HDR Imaging in Cameras, Displays and Human Vision

SC1097Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd Monday 8:30 am to 12:30 pm

High-dynamic range (HDR) imaging is a significant improvement over conventional imaging. After a description of the dynamic range problem in image acquisition, this course focuses on standard methods of creating and manipulating HDR images, replacing myths with measurements of scenes, camera images, and visual appearances. In particular, the course presents measurements about the limits of accurate camera acquisition and the usable range of light for displays of our vision system. Regarding our vision system, the course discusses the role of accurate vs. non-accurate luminance recording for the final appearance of a scene, presenting the quality and the characteristics of visual information actually available on the retina. It ends with a discussion of the principles of tone rendering and the role of spatial comparison.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 45

LEARNING OUTCOMESThis course will enable you to:• explore the history of HDR imaging• describe dynamic range and quantization: the ‘salame’ metaphor• compare single and multiple-exposure for scene capture• measure optical limits in acquisition and visualization• discover relationship between HDR range and scene dependency ;

the effect of glare• explore the limits of our vision system on HDR• calculate retinal luminance• put in relationship the HDR images and the visual appearance• identify tone-rendering problems and spatial methods• verify the changes in color spaces due to dynamic range expansion

INTENDED AUDIENCEColor scientists, software and hardware engineers, photographers, cinematographers, production specialists, and students interested in using HDR images in real applications.

INSTRUCTORalessandro Rizzi has been researching in the field of digital imaging and vision since 1990. His main research topic is the use of color information in digital images with particular attention to color vision mechanisms. He is Associate professor at the Dept. of Computer Science at University of Milano, teaching Fundamentals of Digital Imaging, Multimedia Video, and Human-Computer Interaction. He is one of the founders of the Italian Color Group and member of several program committees of conferences related to color and digital imaging.John McCann received a degree in Biology from Harvard College in 1964. He worked in, and managed, the Vision Research Laboratory at Polaroid from 1961 to 1996. He has studied human color vision, digital image processing, large format instant photography, and the reproduction of fine art. His publications and patents have studied Retinex theory, color constancy, color from rod/cone interactions at low light levels, appearance with scattered light, and HDR imaging. He is a Fellow of the IS&T and the Optical Society of America (OSA). He is a past President of IS&T and the Artists Foundation, Boston. He is the IS&T/OSA 2002 Edwin H. Land Medalist, and IS&T 2005 Honorary Member.

High Dynamic Range Imaging: Sensors and Architectures

SC967Course Level: intermediateCeU: 0.65 $570 Members | $680 Non-Members Usd sunday 8:30 am to 5:30 pm

This course provides attendees with an intermediate knowledge of high dynamic range image sensors and techniques for industrial and non-industrial applications. The course describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets. The course follows a mathematic approach to define the amount of information that can be extracted from the image for each of the methods described. Some methods for automatic control of exposure and dynamic range of image sensors and other issues like color and glare will be introduced.

LEARNING OUTCOMESThis course will enable you to:• describe various approaches to achieve high dynamic range imaging• predict the behavior of a given sensor or architecture on a scene• specify the sensor or system requirements for a high dynamic range

application• classify a high dynamic range application into one of several

standard types

INTENDED AUDIENCEThis material is intended for anyone who needs to learn more about quantitative side of high dynamic range imaging. Optical engineers, electronic engineers and scientists will find useful information for their next high dynamic range application.

INSTRUCTORarnaud darmont is owner and CEO of Aphesa, a company founded in 2008 and specialized in custom camera developments, image sensor consulting, the EMVA1288 standard and camera benchmarking. He holds a degree in Electronic Engineering from the University of Liège (Belgium). Prior to founding Aphesa, he worked for over 7 years in the field of CMOS image sensors and high dynamic range imaging.

COURSE PRICE INCLUDES the text High Dynamic Range Imaging: Sen-sors and Architectures (SPIE Press, 2012) by Arnaud Darmont.

Image Enhancement, Deblurring and Super-Resolution

SC468Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

This course discusses some of the advanced algorithms in the field of digital image processing. In particular, it familiarizes the audience with the understanding, design, and implementation of advanced algorithms used in deblurring, contrast enhancement, sharpening, noise reduction, and super-resolution in still images and video. Some of the applications include medical imaging, entertainment imaging, consumer and professional digital still cameras/camcorders, forensic imaging, and surveillance. Many image examples complement the technical descriptions.

LEARNING OUTCOMESThis course will enable you to:• explain the various nonadaptive and adaptive techniques used

in image contrast enhancement. Examples include PhotoShop commands such as Brightness/Contrast, Auto Levels, Equalize and Shadow/Highlights, or Pizer’s technique and Moroney’s approach

• explain the fundamental techniques used in image Dynamic Range Compression (DRC).Illustrate using the fast bilateral filtering by Dorsey and Durand as an example.

• explain the various techniques used in image noise removal, such as bilateral filtering, sigma filtering and K-Nearest Neighbor

• explain the various techniques used in image sharpening such as nonlinear unsharp masking, etc.

• explain the basic techniques used in image deblurring (restoration) such as inverse filtering and Wiener filtering

• explain the fundamental ideas behind achieving image super-resolution from multiple lower resolution images of the same scene

• explain how motion information can be utilized in image sequences to improve the performance of various enhancement techniques such as noise removal, sharpening, and super-resolution

INTENDED AUDIENCEScientists, engineers, and managers who need to understand and/or apply the techniques employed in digital image processing in various products in a diverse set of applications such as medical imaging, professional and consumer imaging, forensic imaging, etc. Prior knowledge of digital filtering (convolution) is necessary for understanding the (Wiener filtering and inverse filtering) concepts used in deblurring (about 20% of the course content).

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

short Courses

46 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Understanding and Interpreting Images

SC1015Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd tuesday 1:30 pm to 5:30 pm

A key problem in computer vision is image and video understanding, which can be defined as the task of recognizing objects in the scene and their corresponding relationships and semantics, in addition to identifying the scene category itself. Image understanding technology has numerous applications among which are smart capture devices, intelligent image processing, semantic image search and retrieval, image/video utilization (e.g., ratings on quality, usefulness, etc.), security and surveillance, intelligent asset selection and targeted advertising.This tutorial provides an introduction to the theory and practice of image understanding algorithms by studying the various technologies that serve the three major components of a generalized IU system, namely, feature extraction and selection, machine learning tools used for classification, and datasets and ground truth used for training the classifiers. Following this general development, a few application examples are studied in more detail to gain insight into how these technologies are employed in a practical IU system. Applications include face detection, sky detection, image orientation detection, main subject detection, and content based image retrieval (CBIR). Furthermore, realtime demos including face detection and recognition, CBIR, and automatic zooming and cropping of images based on main-subject detection are provided.

LEARNING OUTCOMESThis course will enable you to:• learn the various applications of IU and the scope of its consumer

and commercial uses• explain the various technologies used in image feature extraction

such as global, block-based or region-based color histograms and moments, the “tiny” image, GIST, histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), speeded-up robust features (SURF), bag of words, etc.

• explain the various machine learning paradigms and the fundamental techniques used for classification such as Bayesian classifiers, linear support vector machines (SVM) and nonlinear kernels, boosting techniques (e.g., AdaBoost), k-nearest neighbors, .etc.

• explain the concepts used for classifier evaluation such as false positives and negatives, true positives and negatives, confusion matrix, precision and recall, and receiver operating characteristics (ROC)

• explain the basic methods employed in generating and labeling datasets and ground truth and examples of various datasets such as CMU PIE dataset, Label Me dataset, Caltech 256 dataset, TrecVid, FERET dataset, and Pascal Visual Object Recognition

• explain the fundamental ideas employed in the IU algorithms used for face detection, material detection, image orientation, and a few others

• learn the importance of using context in IU tasks

INTENDED AUDIENCEScientists, engineers, and managers who need to familiarize themselves with IU technology and understand its performance limitations in a diverse set of products and applications. No specific prior knowledge is required except familiarity with general mathematical concepts such as the dot product of two vectors and basic image processing concepts such as histograms, filtering, gradients, etc.

INSTRUCTORMajid Rabbani has 30+ years of experience in digital imaging. He is an Eastman Fellow at Kodak and an adjunct faculty at both RIT and University of Rochester. He is the co-recipient of the 2005 and 1988 Kodak Mees Awards and the co-recipient of two Emmy Engineering Awards for his contributions to digital imaging. He is the co-author of the 1991 book “Digital Image Compression Techniques” and the creator of six video/CDROM courses in the area of digital imaging. In 2012 he received the Electronic Imaging Distinguished Educator Award from SPIE and IS&T for 25 years of educational service to the electronic imaging community. He is a Fellow of SPIE, a Fellow of IEEE, and a Kodak Distinguished Inventor.

Perceptual Metrics for Image and Video Quality in a Broader Context: From Perceptual Transparency to Structural Equivalence

SC812Course Level: intermediateCeU: 0.35 $300 Members | $355 Non-Members Usd Wednesday 1:30 pm to 5:30 pm

We will examine objective criteria for the evaluation of image quality that are based on models of visual perception. Our primary emphasis will be on image fidelity, i.e., how close an image is to a given original or reference image, but we will broaden the scope of image fidelity to include structural equivalence. We will also discuss no-reference and limited-reference metrics. We will examine a variety of applications with special emphasis on image and video compression. We will examine near-threshold perceptual metrics, which explicitly account for human visual system (HVS) sensitivity to noise by estimating thresholds above which the distortion is just-noticeable, and supra-threshold metrics, which attempt to quantify visible distortions encountered in high compression applications or when there are losses due to channel conditions. We will also consider metrics for structural equivalence, whereby the original and the distorted image have visible differences but both look natural and are of equally high visual quality. We will also take a close look at procedures for evaluating the performance of quality metrics, including database design, models for generating realistic distortions for various applications, and subjective procedures for metric development and testing. Throughout the course we will discuss both the state of the art and directions for future research.Course topics include:• Applications: Image and video compression, restoration, retrieval,

graphics, etc.• Human visual system review• Near-threshold and supra-threshold perceptual quality metrics• Structural similarity metrics• Perceptual metrics for texture analysis and compression – structural

texture similarity metrics• No-reference and limited-reference metrics• Models for generating realistic distortions for different applications• Design of databases and subjective procedures for metric

development and testing• Metric performance comparisons, selection, and general use and

abuse• Embedded metric performance, e.g., for rate-distortion optimized

compression or restoration• Metrics for specific distortions, e.g., blocking and blurring, and for

specific attributes, e.g., contrast, roughness, and glossiness• Multimodal applications

LEARNING OUTCOMESThis course will enable you to:• gain a basic understanding of the properties of the human visual

system and how current applications (image and video compression, restoration, retrieval, etc.) that attempt to exploit these properties

• gain an operational understanding of existing perceptually-based and structural similarity metrics, the types of images/artifacts on which they work, and their failure modes

• review current distortion models for different applications, and how they can be used to modify or develop new metrics for specific contexts

• differentiate between sub-threshold and supra-threshold artifacts, the HVS responses to these two paradigms, and the differences in measuring that response

• establish criteria by which to select and interpret a particular metric for a particular application.

• evaluate the capabilities and limitations of full-reference, limited-reference, and no-reference metrics, and why each might be used in a particular application

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 47

INTENDED AUDIENCEImage and video compression specialists who wish to gain an understanding of how performance can be quantified. Engineers and Scientists who wish to learn about objective image and video quality evaluation.Managers who wish to gain a solid overview of image and video quality evaluation. Students who wish to pursue a career in digital image processing. Intellectual Property and Patent Attorneys who wish to gain a more fundamental understanding of quality metrics and the underlying technologies. Government laboratory personnel who work in imaging.Prerequisites: a basic understanding of image compression algorithms, and a background in digital signal processing and basic statistics: frequency-based representations, filtering, distributions.

INSTRUCTORthrasyvoulos Pappas received the S.B., S.M., and Ph.D. degrees in electrical engineering and computer science from MIT in 1979, 1982, and 1987, respectively. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He is currently a professor in the Department of Electrical and Computer Engineering at Northwestern University, which he joined in 1999. His research interests are in image and video quality and compression, image and video analysis, content-based retrieval, perceptual models for multimedia processing, model-based halftoning, and tactile and multimodal interfaces. Dr. Pappas has served as co-chair of the 2005 SPIE/IS&T Electronic Imaging Symposium, and since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging. He has also served as editor-in-chief for the IEEE Transactions on Image Processing from 2010 to 2012. Dr. Pappas is a Fellow of IEEE and SPIE.sheila Hemami received the B.S.E.E. degree from the University of Michigan in 1990, and the M.S.E.E. and Ph.D. degrees from Stanford University in 1992 and 1994, respectively. She was with Hewlett-Packard Laboratories in Palo Alto, California in 1994 and was with the School of Electrical Engineering at Cornell University from 1995-2013. She is currently Professor and Chair of the Department of Electrical & Computer Engineering at Northeastern University in Boston, MA. Dr. Hemami’s research interests broadly concern communication of visual information from the perspectives of both signal processing and psychophysics. She has held various technical leadership positions in the IEEE, served as editor-in-chief for the IEEE Transactions on Multimedia from 2008 to 2010, and was elected a Fellow of the IEEE in 2009 for her for contributions to robust and perceptual image and video communications.

Camera Characterization and Camera Models New

SC1157Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd sunday 8:30 am to 5:30 pm

Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some Image Quality depends not only on the camera components, but also on lighting, photographer skills, picture content, viewing conditions and to some extent on the viewer. While measuring or predicting a camera image quality as perceived by users can be an overwhelming task, many camera attributes can be accurately characterized with objective measurement methodologies.

This course provides an insight on camera models, examining the mathematical models of the three main components of a camera (optics, sensor and ISP) and their interactions as a system (camera) or subsystem (camera at the raw level). The course describes methodologies to characterize the camera as a system or subsystem (modeled from the individual component mathematical models), including lab equipment, lighting systems, measurements devices, charts, protocols and software algorithms. Attributes to be discussed include exposure, color response, sharpness, shading, chromatic aberrations, noise, dynamic range, exposure time, rolling shutter, focusing system, and image stabilization. The course will also address aspects that specifically affect video capture, such as video stabilization, video codec, and temporal noise.The course “SC1049 Benchmarking Image Quality of Still and Video Imaging Systems,” describing perceptual models and subjective measurements, complements the treatment of camera models and objective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• build up and operate a camera characterization lab• master camera characterization protocols• understand camera models• define test plans• compare cameras as a system (end picture), subsystem (raw) or

component level (optics, sensor, ISP)• define data sets for benchmarks

INTENDED AUDIENCEImage scientists, camera designers.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutiquebased in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

short Courses

48 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Benchmarking image Quality of still and Video imaging systems

SC1049Course Level: advancedCeU: 0.65 $525 Members | $635 Non-Members Usd Monday 8:30 am to 5:30 pm

Because image quality is multi-faceted, generating a concise and relevant evaluative summary of photographic systems can be challenging. Indeed, benchmarking the image quality of still and video imaging systems requires that the assessor understands not only the capture device itself, but also the imaging applications for the system.This course explains how objective metrics and subjective methodologies are used to benchmark image quality of photographic still image and video capture devices. The course will go through key image quality attributes and the flaws that degrade those attributes, including causes and consequences of the flaws on perceived quality. Content will describe various subjective evaluation methodologies as well as objective measurement methodologies relying on existing standards from ISO, IEEE/CPIQ, ITU and beyond. Because imaging systems are intended for visual purposes, emphasis will be on the value of using objective metrics which are perceptually correlated and generating benchmark data from the combination of objective and subjective metrics.The course “SC1157 Camera Characterization and Camera Models,” describing camera models and objective measurements, complements the treatment of perceptual models and subjective measurements provided here.

LEARNING OUTCOMESThis course will enable you to:• summarize the overall image quality of a camera• identify defects that degrade image quality in natural images and

what component of the camera should/could be improved for better image quality

• evaluate the impact various output use cases have on overall image quality

• define subjective test plans and protocols• compare the image quality of a set of cameras• set up benchmarking protocols depending on use cases• build up a subjective image quality lab

INTENDED AUDIENCEImage scientists, engineers, or managers who wish to learn more about image quality and how to evaluate still and video cameras for various applications. A good understanding of imaging and how a camera works is assumed.

INSTRUCTORJonathan Phillips is a senior image quality scientist in the camera group at NVIDIA. His involvement in the imaging industry spans over 20 years, including two decades at Eastman Kodak Company. His focus has been on photographic quality, with an emphasis on psychophysical testing for both product development and fundamental perceptual studies. His broad experience has included image quality work with capture, display, and print technologies. He received the 2011 I3A Achievement Award for his work on camera phone image quality and headed up the 2012 revision of ISO 20462 - Psychophysical experimental methods for estimating image quality - Part 3: Quality ruler method. He completed his graduate work in color science in the Center for Imaging Science at Rochester Institute of Technology and his chemistry undergraduate at Wheaton College (IL).Harvey (Hervé) Hornung is Camera Characterization Guru at Marvell Semiconductor Inc. His main skill is camera objective characterization and calibration. He worked on a camera array at Pelican Imaging for 2 years and worked at DxO Labs for 8 years as a technical leader in the Image Quality Evaluation business unit, including the most comprehensive objective image quality evaluation product DxO Analyzer and the famous website DxOMark. Harvey has been active in computer graphics and image processing for 20 years and teaches camera characterization and benchmarking at different conferences.

Hugh denman is a video processing and quality specialist at Google, involved in video quality assessment with YouTube and camera quality assessment for Google Chrome devices. Hugh was previously a founding engineer with Green Parrot Pictures, a video algorithms boutique based in Ireland and acquired by Google in 2011. While at Google, he has consulted on camera quality assessment with numerous sensor, ISP, and module vendors, and co-ordinates the Google Chrome OS image quality specification.

Perception, Cognition, and Next Generation Imaging

SC969Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

The world of electronic imaging is an explosion of hardware and software technologies, used in a variety of applications, in a wide range of domains. These technologies provide visual, auditory and tactile information to human observers, whose job it is to make decisions and solve problems. In this course, we will study fundamentals in human perception and cognition, and see how these principles can guide the design of systems that enhance human performance. We will study examples in display technology, image quality, visualization, image search, visual monitoring and haptics, and students will be encouraged to bring forward ongoing problems of interest to them.

LEARNING OUTCOMESThis course will enable you to:• describe basic principles of spatial, temporal, and color processing

by the human visual system, and know where to go for deeper insight

• explore basic cognitive processes, including visual attention and semantics

• develop skills in applying knowledge about human perception and cognition to engineering applications

INTENDED AUDIENCEScientists, engineers, technicians, or managers who are involved in the design, testing or evaluation of electronic imaging systems. Business managers responsible for innovation and new product development. Anyone interested in human perception and the evolution of electronic imaging applications.

INSTRUCTORBernice Rogowitz founded and co-chairs the SPIE/IS&T Conference on Human Vision and Electronic Imaging (HVEI) which is a multi-disciplinary forum for research on perceptual and cognitive issues in imaging systems. Dr. Rogowitz received her PhD from Columbia University in visual psychophysics, worked as a researcher and research manager at the IBM T.J. Watson Research Center for over 20 years, and is currently a consultant in vision, visual analysis and sensory interfaces. She has published over 60 technical papers and has over 12 patents on perceptually-based approaches to visualization, display technology, semantic image search, color, social networking, surveillance, haptic interfaces. She is a Fellow of the SPIE and the IS&T.

short Courses

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 49

Introduction to Digital Color Imaging New

SC1154Course Level: introductoryCeU: 0.35 $300 Members | $355 Non-Members Usd sunday 8:30 am to 12:30 pm

This short course provides an introduction to color science and digital color imaging systems. Foundational knowledge is introduced first via a overview of the basics of color science and perception, color representation, and the physical mechanisms for displaying and printing colors. Building upon this base, an end-to-end systems view of color imaging is presented that covers color management and color image processing for display, capture, and print. A key objective of the course is to highlight the interactions between the different modules in a color imaging system and to illustrate via examples how co-design has played an important role in the development of current digital color imaging devices and algorithms.

LEARNING OUTCOMESThis course will enable you to:• explain how color is perceived starting from a physical stimulus

and proceeding through the successive stages of the visual system by using the concepts of tristimulus values, opponent channel representation, and simultaneous contrast

• describe the common representations for color and spatial content in images and their interrelations with the characteristics of the human visual system

• list basic processing functions in a digital color imaging system, and schematically represent a system from input to output for common devices such as a digital cameras, displays, and color printers

short Courses• describe why color management is required and how it is performed• explain the role of color appearance transforms in image color

manipulations for gamut mapping and enhancement• explain how interactions between color and spatial dimensions

are commonly utilized in designing color imaging systems and algorithms

• cite examples of algorithms and systems that break traditional cost, performance, and functionality tradeoffs through system-wide optimization

INTENDED AUDIENCEThe short course is intended for engineers, scientists, students, and managers interested in acquiring a broad- system wide view of digital color imaging systems. Prior familiarity with basics of signal and image processing, in particular Fourier representations, is helpful although not essential for an intuitive understanding.

INSTRUCTORgaurav sharma has over two decades of experience in the design and optimization of color imaging systems and algorithms that spans employment at the Xerox Innovation Group and his current position as a Professor at the University of Rochester in the Departments of Electrical and Computer Engineering and Computer Science. Additionally, he has consulted for several companies on the development of new imaging systems and algorithms. He holds 49 issued patents and has authored over a 150 peer-reviewed publications. He is the editor of the “Digital Color Imaging Handbook” published by CRC Press and currently serves as the Editor-in-Chief for the SPIE/IS&T Journal of Electronic Imaging. Dr. Sharma is a fellow of IEEE, SPIE, and IS&T.

50 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Join us in celebrating the International Year of LightThe International Year of Light is a global initiative highlighting to the citizens of the world the importance

of light and light-based technologies in their lives, for their futures, and for the development of society.

We hope that the International Year of Light will increase global awareness of the central role of light in

human activities and that the brightest young minds continue to be attracted to careers in this field.

Light-based technologies respond to the needs of humankind

For more information on how you and your organization can participate, visit www.spie.org/IYL

www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected] 51

About the Symposium Organizers

IS&T, the Society for Imaging Science and Technology, is an international non-profit dedicated to keeping members and others apprised of the latest developments in fields related to imaging science through conferences, educational programs, publications, and its website. IS&T encompasses all aspects of imaging, with particular emphasis on digital printing, electronic imaging, color science, photofinishing, image preservation, silver halide, pre-press technology, and hybrid imaging systems.

IS&T offers members:

• Free, downloadable access to more than 16,000 papers from IS&T conference proceedings via www.imaging.org

• Complimentary online subscriptions to the Journal of Imaging Science & Technology or the Journal of Electronic Imaging

• Reduced rates on IS&T and other publications, including books, conference proceedings, and a second journal subscription.

• Reduced registration fees at all IS&T sponsored or co-sponsored conferences—a value equal to the difference between member and non-member rates alone—as well as on conference short courses

• Access to the IS&T member directory

• Networking opportunities through active participation in chapter activities and conference, program, and other committees

• Subscription to the IS&T The Reporter, a bi-monthly newsletter

• An honors and awards program

Contact IS&T for more information on these and other benefits.

is&t7003 Kilworth LaneSpringfield, VA 22151703/642-9090; 703/642-9094 [email protected]

SPIE is an international society advancing an interdisciplinary approach to the science and application of light. SPIE advances the goals of its Members, and the broader scientific community, in a variety of ways:

• SPIE acts as a catalyst for collaboration among technical disciplines, for information exchange, continuing education, publishing opportunities, patent precedent, and career and professional growth.

• SPIE is the largest organizer and sponsor of international conferences, educational programs, and technical exhibitions on optics, photonics and imaging technologies. SPIE manages 25 to 30 events in North America, Europe, Asia, and the South Pacific annually; over 40,000 researchers, product developers, and industry representatives participate in presenting, publishing, speaking, learning and networking opportunities.

• The Society spends more than $3.2 million annually in scholarships, grants, and financial support. With more than 200 Student Chapters around the world, SPIE is expanding opportunities for students to develop professional skills and utilize career opportunities, supporting the next generation of scientists and engineers.

• SPIE publishes ten scholarly journals and a variety of print media publications. The SPIE Digital Library also publishes the latest research—close to 20,000 proceedings papers each year.

sPie international Headquarters1000 20th St., Bellingham, WA 98225-6705 USATel: +1 360 676 3290 Fax: +1 360 647 1445 [email protected] • www.SPIE.org

52 www.electronicimaging.org • TEL: +1 703 642 9090 • [email protected]

Conferences and Courses8–12 February 2015

Location Hilton San Francisco, Union SquareSan Francisco, California, USA

electronicimaging

2015

Register todaywww.electronicimaging.org

Technologies for digital imaging systems, 3D display, image quality, multimedia, and mobile applications