
7/29/2019 Wp Objective Testing

1566 La Pradera Dr
Campbell, CA 95008
www.videoclarity.com
408-379-6952

Version 2.0 A Video Clarity White Paper page 1 of 3

    White Paper: How to Do Objective Video Testing

Bill Reckwerdt, CTO, Video Clarity, Inc.


Over recent decades, the role of video images has grown steadily. Advances in the technologies underlying the capture, transfer, storage, and display of images have made communicating with images economically feasible. More importantly, video images are in many situations an extremely efficient way of communicating, as witnessed by the proverb "a picture is worth a thousand words."

Notwithstanding these technological advances, the current state of the art requires many compromises. Examples of these compromises are temporal resolution versus noise, spatial resolution versus image size, and luminance/color range versus gamut.

These choices affect the video quality of the reproduced images. To make optimal choices, it is necessary to know how particular choices affect the impression of the viewer. This is the central question of all video quality research.

Current video quality research can be divided into two approaches: experimental evaluation and modeling.

    Experimental Evaluation

A group of human subjects is invited to judge the quality of video sequences under defined conditions. Several recommendations are found in ITU-R BT.500-10, Methodology for the Subjective Assessment of the Quality of Television Pictures, and ITU-T P.910, Subjective Video Quality Assessment Methods for Multimedia Applications. The methods are summarized here.

The main subjective quality methods are Degradation Category Rating (DCR), Pair Comparison (PC), and Absolute Category Rating (ACR). The human subjects are shown two sequences (original and processed) and are asked to assess the overall quality of the processed sequence with respect to the original (reference) sequence. The test is divided into multiple sessions, and each session should not last more than 30 minutes. Several dummy sequences are added to every session; these are used to train the human subjects and are not included in the final score. The subjects score the processed video sequence on a scale (usually 5 or 9 points) corresponding to their mental measure of its quality; this is termed the Mean Observer Score (MOS).
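The MOS tally described above amounts to averaging each sequence's per-subject scores while discarding the training material. A minimal sketch in Python, where the function name and the dictionary layout are illustrative assumptions rather than anything prescribed by the ITU recommendations:

```python
def mean_observer_scores(ratings, dummies=frozenset()):
    """Average per-subject ratings into a MOS for each sequence.

    ratings: {sequence_name: [scores on a 5- or 9-point scale]}
    dummies: names of training (dummy) sequences, which are shown to
             the subjects but excluded from the final score.
    """
    return {name: sum(scores) / len(scores)
            for name, scores in ratings.items()
            if name not in dummies}
```

For example, `mean_observer_scores({"clip_a": [4, 5, 3, 4], "warmup": [1, 2]}, dummies={"warmup"})` yields `{"clip_a": 4.0}`: the dummy sequence is dropped and the remaining scores are averaged.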

Two serious drawbacks of this approach are:

1. It is extremely time consuming and tiresome for the participants.
2. The obtained knowledge cannot be generalized, because the relationships between design choices and video quality are descriptive rather than based on understanding.

As a result, in a single series of experiments only a small fraction of the possible design decisions can be investigated. This makes the process even longer and more tedious.

    Modeling

The second approach tries to address these drawbacks by developing models that describe the influence of several physical image characteristics on video quality, usually through a set of video attributes thought to determine video quality. When the influence of a set of design choices on physical video characteristics is known, such models can predict video quality. The models express video quality in terms of visible distortions, or artifacts, introduced during the design process. Examples of typical distortions include flickering, blockiness, noisiness, and color shifts.

Two types of models exist; the fundamental difference between them is how the impairment is calculated.

In the first type, physiological or psychophysical models of early visual processing are used to calculate impairment from a difference between the video sequences. Many well-known metrics exist which compare the original to the processed output:

PSNR - Peak Signal-to-Noise Ratio

JND - Just Noticeable Differences

SSIM - Structural SIMilarity

VQM - Video Quality Metric

MPQM - Moving Picture Quality Metric

NVFM - Normalized Video Fidelity Metric

The two most important drawbacks of this approach are:


1. It is unclear what exactly the original version of a video is.

2. These algorithms measure visible differences, not video quality.
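Of the full-reference metrics listed above, PSNR is the simplest to compute: it is the ratio of the peak signal power to the mean squared error between the two sequences, expressed in decibels. A rough sketch over flattened 8-bit pixel samples (pure Python; the helper name is our own, not from any standard library):

```python
import math

def psnr(original, processed, peak=255):
    """Peak Signal-to-Noise Ratio, in dB, between two equal-length
    sequences of pixel samples (e.g. flattened 8-bit luma frames)."""
    if len(original) != len(processed):
        raise ValueError("frames must have the same number of samples")
    # Mean squared error between reference and processed samples.
    mse = sum((o - p) ** 2 for o, p in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no measurable difference
    return 10 * math.log10(peak ** 2 / mse)
```

Identical frames give infinite PSNR; a mean squared error of 1 on 8-bit samples gives about 48.13 dB. Note that this illustrates the second drawback above: PSNR scores a numerical difference, which may or may not correspond to a visible one.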

The second type of model tries to estimate visible distortions directly from the processed video, instead of comparing it to the original. In this type of model, visible distortions of a video, such as unsharpness or noisiness, are predicted by estimating physical attributes of the video. The advantage of this approach is that the original video sequence is not needed. The uncertain translation from visible distortions to video quality is an important drawback of this approach.
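As one illustrative sketch of this no-reference idea (our own toy example, not a metric from the paper), blockiness along a single row of pixels can be estimated without the original by comparing the average luminance jump at assumed 8-pixel block boundaries against the average jump elsewhere:

```python
def blockiness_1d(row, block=8):
    """Toy no-reference blockiness estimate for one row of pixel values:
    the mean luminance jump at block-boundary positions divided by the
    mean jump at interior positions. Values well above 1.0 suggest
    visible block edges; ~1.0 suggests none."""
    diffs = [abs(a - b) for a, b in zip(row, row[1:])]
    # Position i is the edge between pixels i-1 and i.
    boundary = [d for i, d in enumerate(diffs, start=1) if i % block == 0]
    interior = [d for i, d in enumerate(diffs, start=1) if i % block != 0]
    if not boundary or not interior or sum(interior) == 0:
        return 1.0  # degenerate row: nothing to compare against
    return (sum(boundary) / len(boundary)) / (sum(interior) / len(interior))
```

A row with a sharp step at a block boundary scores far above 1.0, while a smooth gradient scores 1.0. The uncertain step, as the text notes, is translating such a distortion estimate into an overall quality score.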

    The Author

Bill Reckwerdt has been involved in digital video since the early 90s, from digital compression and video on demand to streaming servers. He received his MS, specializing in Behavioral Modeling and Design Automation, from the University of Illinois Urbana-Champaign.

He is currently the VP of Marketing and the CTO of Video Clarity, which makes quantitative, repeatable video quality testing tools. For more information about Video Clarity, please visit their website at http://www.videoclarity.com.
