2003.09.23 - SLIDE 1 | IS 202 – FALL 2003
Lecture 10: Metadata for Media
Prof. Ray Larson & Prof. Marc Davis
UC Berkeley SIMS
Tuesday and Thursday 10:30 am - 12:00 pm
Fall 2003
http://www.sims.berkeley.edu/academics/courses/is202/f03/
SIMS 202:
Information Organization
and Retrieval
2003.09.23 - SLIDE 2 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 4 | IS 202 – FALL 2003
The Media Opportunity
• Vastly more media will be produced
• Without ways to manage it (metadata creation and use) we lose the advantages of digital media
• Most current approaches are insufficient and perhaps misguided
• Great opportunity for innovation and invention
• Need interdisciplinary approaches to the problem
2003.09.23 - SLIDE 5 | IS 202 – FALL 2003
What is the Problem?
• Today people cannot easily find, edit, share, and reuse media
• Computers don’t understand media content
– Media is opaque and data rich
– We lack structured representations
• Without content representation (metadata), manipulating digital media will remain like word-processing with bitmaps
2003.09.23 - SLIDE 6 | IS 202 – FALL 2003
[Diagram] Traditional Media Production Chain: PRE-PRODUCTION → PRODUCTION → POST-PRODUCTION → DISTRIBUTION
[Diagram] Metadata-Centric Production Chain: metadata created and used at every stage of the chain
2003.09.23 - SLIDE 7 | IS 202 – FALL 2003
[Diagram] Automated Media Production Process:
1. Active Capture (annotation of media assets)
2. Annotation and Retrieval (Reusable Online Asset Database; asset retrieval and reuse)
3. Automatic Editing (Adaptive Media Engine)
4. Personalized/Customized Delivery (web integration and streaming media services: Flash Generator, WAP, HTML Email, Print/Physical Media)
2003.09.23 - SLIDE 8 | IS 202 – FALL 2003
Technology Summary
• Media Streams provides a framework for creating metadata throughout the media production cycle to make media assets searchable and reusable
• Active Capture automates direction and cinematography using real-time audio-video analysis in an interactive control loop to create reusable media assets
• Adaptive Media uses adaptive media templates and automatic editing functions to mass customize and personalize media and thereby eliminate the need for editing on the part of end users
• Together, these technologies will automate, personalize, and speed up media production, distribution, and reuse
2003.09.23 - SLIDE 12 | IS 202 – FALL 2003
Evolution of Media Production
• Customized production
– Skilled creation of one media product
• Mass production
– Automatic replication of one media product
• Mass customization
– Skilled creation of adaptive media templates
– Automatic production of customized media
2003.09.23 - SLIDE 13 | IS 202 – FALL 2003
Central Idea: Movies as Programs
• Movies change from being static data to programs
• Shots are inputs to a program that computes new media based on content representation and functional dependency (US Patents 6,243,087 & 5,969,716)
[Diagram] Media → Parser → Content Representation → Producer → new Media
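As a thought experiment, the "movies as programs" idea can be sketched in a few lines: an adaptive template is a function from annotated shots to an edited sequence. Everything here (the `Shot` class, `greeting_template`, the descriptor vocabulary) is a hypothetical illustration, not the patented system's actual design.

```python
# Hypothetical sketch of "movies as programs": an adaptive media
# template is a function that selects and orders shots based on their
# content representation, instead of being a fixed edit.
from dataclasses import dataclass

@dataclass
class Shot:
    clip_id: str
    descriptors: frozenset  # content representation, e.g. {"jack", "waving"}

def greeting_template(shots, subject):
    """Compute a new sequence: a wide establishing shot of the subject,
    followed by any shot of the subject waving."""
    establishing = [s for s in shots if {subject, "wide-shot"} <= s.descriptors]
    waving = [s for s in shots if {subject, "waving"} <= s.descriptors]
    if establishing and waving:
        return [establishing[0], waving[0]]
    return []  # the template cannot be satisfied with these assets

shots = [
    Shot("s1", frozenset({"jack", "wide-shot"})),
    Shot("s2", frozenset({"jack", "waving", "medium-shot"})),
    Shot("s3", frozenset({"maria", "waving"})),
]
print([s.clip_id for s in greeting_template(shots, "jack")])  # ['s1', 's2']
```

The point of the sketch: the same asset pool yields different movies for different subjects, which is what "shots as inputs to a program" means.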
2003.09.23 - SLIDE 14 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 15 | IS 202 – FALL 2003
Representing Video
• Streams vs. Clips
• Video syntax and semantics
• Ontological issues in video representation
2003.09.23 - SLIDE 16 | IS 202 – FALL 2003
Video is Temporal
Stream of 100 Frames of Video
A Clip from Frame 47 to Frame 68 with Descriptors
2003.09.23 - SLIDE 17 | IS 202 – FALL 2003
Streams vs. Clips
The Stream of 100 Frames of Video with 6 Annotations Resulting in Many Possible Segmentations of the Stream
Stream of 100 Frames of Video
2003.09.23 - SLIDE 18 | IS 202 – FALL 2003
Stream-Based Representation
• Makes annotation pay off
– The richer the annotation, the more numerous the possible segmentations of the video stream
• Clips
– Change from being fixed segmentations of the video stream to being the results of retrieval queries based on annotations of the video stream
• Annotations
– Create representations which make clips, not representations of clips
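A minimal sketch of stream-based representation, assuming annotations are simple (start_frame, end_frame, descriptor) triples (my simplification; the real representation is far richer): a "clip" falls out as the result of a query over the annotated stream rather than a fixed segmentation.

```python
# Sketch (schema hypothetical): annotations as intervals over a frame
# stream; a "clip" is the result of a query, not a fixed segmentation.
import itertools

annotations = [
    (0, 99, "outdoors"),   # (start_frame, end_frame, descriptor)
    (10, 40, "jack"),
    (25, 60, "waving"),
    (47, 68, "close-up"),
]

def find_clips(annotations, *descriptors):
    """Return (start, end) frame ranges where all descriptors co-occur."""
    per_descriptor = []
    for d in descriptors:
        matches = [(s, e) for s, e, label in annotations if label == d]
        if not matches:
            return []  # a descriptor with no annotations can never match
        per_descriptor.append(matches)
    clips = []
    # Intersect one interval per descriptor; fine at this toy scale.
    for combo in itertools.product(*per_descriptor):
        start = max(s for s, _ in combo)
        end = min(e for _, e in combo)
        if start <= end:
            clips.append((start, end))
    return clips

print(find_clips(annotations, "jack", "waving"))  # [(25, 40)]
```

Note how adding one more annotation would create new answerable queries without re-segmenting anything, which is why "the richer the annotation, the more numerous the possible segmentations".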
2003.09.23 - SLIDE 19 | IS 202 – FALL 2003
Video Syntax and Semantics
• The Kuleshov Effect
• Video has a dual semantics
– Sequence-independent invariant semantics of shots
– Sequence-dependent variable semantics of shots
2003.09.23 - SLIDE 20 | IS 202 – FALL 2003
Ontological Issues for Video
• Video plays with rules for identity and continuity
– Space
– Time
– Person
– Action
2003.09.23 - SLIDE 21 | IS 202 – FALL 2003
Space and Time: Actual vs. Inferable
• Actual Recorded Space and Time
– GPS
– Studio space and time
• Inferable Space and Time
– Establishing shots
– Cues and clues
2003.09.23 - SLIDE 22 | IS 202 – FALL 2003
Time: Temporal Durations
• Story (Fabula) Duration
– Example: Brushing teeth in story world (5 minutes)
• Plot (Syuzhet) Duration
– Example: Brushing teeth in plot world (1 minute: 6 steps of 10 seconds each)
• Screen Duration
– Example: Brushing teeth (10 seconds: 2 shots of 5 seconds each)
2003.09.23 - SLIDE 23 | IS 202 – FALL 2003
Character and Continuity
• Identity of character is constructed through
– Continuity of actor
– Continuity of role
• Alternative continuities
– Continuity of actor only
– Continuity of role only
2003.09.23 - SLIDE 24 | IS 202 – FALL 2003
Representing Action
• Physically-based description for sequence-independent action semantics
– Abstract vs. conventionalized descriptions
– Temporally and spatially decomposable actions and subactions
• Issues in describing sequence-dependent action semantics
– Mental states (emotions vs. expressions)
– Cultural differences (e.g., bowing vs. greeting)
2003.09.23 - SLIDE 25 | IS 202 – FALL 2003
“Cinematic” Actions
• Cinematic actions support the basic narrative structure of cinema
– Reactions/Proactions
• Nodding, screaming, laughing, etc.
– Focus of Attention
• Gazing, head turning, pointing, etc.
– Locomotion
• Walking, running, etc.
• Cinematic actions can occur
– Within the frame/shot boundary
– Across the frame boundary
– Across shot boundaries
2003.09.23 - SLIDE 26 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 27 | IS 202 – FALL 2003
The Search for Solutions
• Current approaches to creating metadata don’t work
– Signal-based analysis
– Keywords
– Natural language
• Need a standardized metadata framework
– Designed for video and rich media data
– Human and machine readable and writable
– Standardized and scalable
– Integrated into media capture, archiving, editing, distribution, and reuse
2003.09.23 - SLIDE 28 | IS 202 – FALL 2003
Signal-Based Parsing
• Practical problem
– Parsing unstructured, unknown video is very, very hard
• Theoretical problem
– Mismatch between percepts and concepts
2003.09.23 - SLIDE 29 | IS 202 – FALL 2003
Perceptual/Conceptual Issue
[Images] Clown Nose vs. Red Sun
Similar Percepts / Dissimilar Concepts
2003.09.23 - SLIDE 30 | IS 202 – FALL 2003
Perceptual/Conceptual Issue
[Images] John Dillinger’s car vs. Timothy McVeigh’s car
Dissimilar Percepts / Similar Concepts
2003.09.23 - SLIDE 31 | IS 202 – FALL 2003
Signal-Based Parsing
• Effective and useful automatic parsing
– Video
• Shot boundary detection
• Camera motion analysis
• Low-level visual similarity
• Feature tracking
• Face detection
– Audio
• Pause detection
• Audio pattern matching
• Simple speech recognition
• Speech vs. music detection
• Approaches to automated parsing
– At the point of capture, integrate the recording device, the environment, and agents in the environment into an interactive system
– After capture, use “human-in-the-loop” algorithms to leverage human and machine intelligence
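Shot boundary detection, the first item on the video list, is commonly approximated by thresholding the histogram difference between consecutive frames. This is a generic baseline, not a method from the lecture, and the frames here are stand-in histograms rather than decoded video.

```python
# Generic shot-boundary baseline (not the lecture's method): flag a cut
# when the histogram difference between consecutive frames is large.
# Frames are stand-in 2-bin grayscale histograms, not decoded video.

def histogram_diff(h1, h2):
    """Sum of absolute bin differences, normalized to [0, 1]."""
    total = sum(h1)  # assumes both histograms count the same number of pixels
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)

def detect_cuts(histograms, threshold=0.5):
    """Return indices i where a cut occurs between frame i-1 and frame i."""
    return [
        i for i in range(1, len(histograms))
        if histogram_diff(histograms[i - 1], histograms[i]) > threshold
    ]

# Three frames of a dark scene, then three frames of a bright scene.
frames = [[90, 10], [85, 15], [88, 12], [5, 95], [8, 92], [6, 94]]
print(detect_cuts(frames))  # [3]
```

The clown-nose/red-sun slides explain why this only goes so far: histograms compare percepts, and similar percepts can carry dissimilar concepts.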
2003.09.23 - SLIDE 34 | IS 202 – FALL 2003
Why Keywords Don’t Work
• Are not a semantic representation
• Do not describe relations between descriptors
• Do not describe temporal structure
• Do not converge
• Do not scale
2003.09.23 - SLIDE 35 | IS 202 – FALL 2003
Natural Language vs. Visual Language
Jack, an adult male police officer, while walking to the left, starts waving with his left arm, and then has a puzzled look on his face as he turns his head to the right; he then drops his facial expression and stops turning his head, immediately looks up, and then stops looking up after he stops waving but before he stops walking.
2003.09.23 - SLIDE 38 | IS 202 – FALL 2003
Visual Language Advantages
• A language designed as an accurate and readable representation of time-based media
– For video, especially important for actions, expressions, and spatial relations
• Enables a Gestalt view and quick recognition of descriptors due to designed visual similarities
• Supports global use of annotations
2003.09.23 - SLIDE 39 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 41 | IS 202 – FALL 2003
Media Streams Features
• Key features
– Stream-based representation (better segmentation)
– Semantic indexing (what things are similar to)
– Relational indexing (who is doing what to whom)
– Temporal indexing (when things happen)
– Iconic interface (designed visual language)
– Universal annotation (standardized markup schema)
• Key benefits
– More accurate annotation and retrieval
– Global usability and standardization
– Reuse of rich media according to content and structure
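What relational and temporal indexing buy can be sketched with a toy tuple store (the schema is my illustration, not Media Streams' actual data model): each record says who does what to whom, and over which frames.

```python
# Toy tuple store (schema is an illustration, not Media Streams' data
# model): each record says who does what to whom, and over which frames.
records = [
    ("jack", "waves-at", "maria", 10, 40),
    ("maria", "waves-at", "jack", 35, 50),
    ("jack", "walks", None, 0, 60),
]

def who_does_what_to_whom(records, action, patient):
    """Relational query: which agents perform `action` on `patient`?"""
    return [agent for agent, a, p, _, _ in records
            if a == action and p == patient]

def happening_at(records, frame):
    """Temporal query: which (agent, action) pairs are underway at `frame`?"""
    return [(agent, a) for agent, a, _, s, e in records if s <= frame <= e]

print(who_does_what_to_whom(records, "waves-at", "maria"))  # ['jack']
print(happening_at(records, 45))  # jack's wave has ended; maria's has not
```

Keywords alone cannot answer either query: a bag of words like {jack, maria, waves} loses both the direction of the action and its timing.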
2003.09.23 - SLIDE 42 | IS 202 – FALL 2003
Media Streams GUI Components
• Media Time Line
• Icon Space
– Icon Workshop
– Icon Palette
2003.09.23 - SLIDE 43 | IS 202 – FALL 2003
Media Time Line
• Visualize video at multiple time scales
• Write and read multi-layered iconic annotations
• One interface for annotation, query, and composition
2003.09.23 - SLIDE 45 | IS 202 – FALL 2003
Icon Space
• Icon Workshop
– Utilize categories of video representation
– Create iconic descriptors by compounding iconic primitives
– Extend set of iconic descriptors
• Icon Palette
– Dynamically group related sets of iconic descriptors
– Reuse descriptive effort of others
– View and use query results
2003.09.23 - SLIDE 47 | IS 202 – FALL 2003
Icon Space: Icon Workshop
• General to specific (horizontal)
– Cascading hierarchy of icons with increasing specificity on subordinate levels
• Combinatorial (vertical)
– Compounding of hierarchically organized icons across multiple axes of description
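The two construction modes, cascading specialization and combinatorial compounding, can be illustrated with a toy hierarchy (the structure and all names are hypothetical, not the Icon Workshop's real vocabulary):

```python
# Toy illustration (structure and names hypothetical) of the Icon
# Workshop's two axes: specializing down a cascading hierarchy, and
# compounding primitives from different axes of description.
hierarchy = {
    "person": ["adult", "child"],
    "adult": ["adult-male", "adult-female"],
}

def specialize(icon, hierarchy):
    """General-to-specific: all icons reachable below `icon`."""
    children = hierarchy.get(icon, [])
    result = list(children)
    for child in children:
        result.extend(specialize(child, hierarchy))
    return result

def compound(*primitives):
    """Combinatorial: fuse primitives from different axes into one descriptor."""
    return "+".join(primitives)

print(specialize("person", hierarchy))
print(compound("adult-male", "police-officer", "walking-left"))
```

A small set of primitives thus yields a combinatorially large descriptor space, which is the scaling argument against flat keyword vocabularies.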
2003.09.23 - SLIDE 49 | IS 202 – FALL 2003
Icon Space: Icon Palette
• Dynamically group related sets of iconic descriptors
• Collect icon sentences
• Reuse descriptive effort of others
2003.09.23 - SLIDE 51 | IS 202 – FALL 2003
Video Retrieval In Media Streams
• Same interface for annotation and retrieval
• Assembles responses to queries as well as finding them
• Query responses use semantics to degrade gracefully
2003.09.23 - SLIDE 52 | IS 202 – FALL 2003
Media Streams Technologies
• Minimal video representation distinguishing syntax and semantics
• Iconic visual language for annotating and retrieving video content
• Retrieval-by-composition methods for repurposing video
2003.09.23 - SLIDE 53 | IS 202 – FALL 2003
Non-Technical Challenges
• Standardization of media metadata (MPEG-7)
• Broadband infrastructure and deployment
• Intellectual property and economic models for sharing and reuse of media assets
2003.09.23 - SLIDE 54 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 55 | IS 202 – FALL 2003
Discussion Questions (Davis)
• John Snydal on Media Streams
– What is the target audience of users (annotators/retrievers) for Media Streams? In the article the following groups are mentioned:
• Content providers
• Video editors
• News teams
• Documentary film makers
• Film archives
• Stock photo houses
• Video archivists
• Video producers
• (international audience)
• (illiterate and preliterate people)
– Is it possible that Media Streams could satisfy the needs, goals, and requirements of all of these groups, or would it be more appropriate to develop separate, tailored applications for the unique needs of each group?
2003.09.23 - SLIDE 56 | IS 202 – FALL 2003
Discussion Questions (Davis)
• danah boyd on Media Streams
– Icons require visual literacy. Icons are also culturally constructed. Thus, for them to work as an information access bit, people must learn the visual language; it is not inherent. What are the social consequences of a system dependent on unfamiliar cues?
2003.09.23 - SLIDE 57 | IS 202 – FALL 2003
Discussion Questions (Davis)
• danah boyd on Media Streams
– Films are constructed narratives. But most commonplace storytelling is not. Even in a creative form, people often piece together found objects instead of finding objects to fit their story. (Think teenage girls making collages out of the latest YM.) Storytelling also happens around media far more than through media (i.e. telling a story about a picture rather than using a collection of pictures to tell a story). My guess is that this social phenomenon goes beyond the retrieval issues. Do you think that Media Streams would encourage new behavior regarding storytelling or will it only be useful for those with a constructed narrative in mind? Why (not)?
2003.09.23 - SLIDE 58 | IS 202 – FALL 2003
Discussion Questions (Davis)
• Jesse Mendelsohn on Media Streams
– Media Streams does not allow iconic descriptions of emotion or scene-interpretation. How would someone searching stock footage for a “suspenseful scene of two men beating each other” go about doing it? The actual sense of “suspense” and the act of “beating” cannot be iconified. Does this limit Media Streams' ability, or is there a way around it within its capabilities as described?
2003.09.23 - SLIDE 59 | IS 202 – FALL 2003
Discussion Questions (Davis)
• Jesse Mendelsohn on Media Streams
– In order for Media Streams to work well, it relies on the availability of a very large and extensive resource of well-annotated video. Is the current annotation process too primitive and/or time-consuming to allow Media Streams to work to its full potential? Will changing how Media Streams can be used to annotate video, or changing video annotation methods in general, make Media Streams more effective?
2003.09.23 - SLIDE 60 | IS 202 – FALL 2003
Today’s Agenda
• Review of Last Time
• Metadata for Motion Pictures
– Representing Video
– Current Approaches
– Media Streams
• Discussion Questions
• Action Items for Next Time
2003.09.23 - SLIDE 61 | IS 202 – FALL 2003
Assignment 4.1
• Phone Metadata Design - Part 1
– Due Oct 2