Source: //hays/compvision2017/lectures/19.pdf
TRANSCRIPT
By Suren Manvelyan, http://www.surenmanvelyan.com/gallery/7116
Indy
Image Categorization
[Diagram] Training: training images + training labels → image features → classifier training.
Testing: test image → image features → trained classifier → prediction (e.g., “Outdoor”).
History of ideas in recognition
• 1960s – early 1990s: the geometric era
• 1990s: appearance-based models
• Mid-1990s: sliding window approaches
• Late 1990s: local features
• Early 2000s: parts-and-shape models
• Mid-2000s: bags of features
Svetlana Lazebnik
Bag-of-features models
Svetlana Lazebnik
Object → Bag of ‘words’
Bag-of-features models
Svetlana Lazebnik
Objects as texture
• All of these are treated as being the same
• No distinction between foreground and background: scene recognition?
Svetlana Lazebnik
Origin 1: Texture recognition
• Texture is characterized by the repetition of basic elements
or textons
• For stochastic textures, it is the identity of the textons, not
their spatial arrangement, that matters
Julesz, 1981; Cula & Dana, 2001; Leung & Malik 2001; Mori, Belongie & Malik, 2001;
Schmid 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
Origin 1: Texture recognition
Universal texton dictionary
histogram
Julesz, 1981; Cula & Dana, 2001; Leung & Malik 2001; Mori, Belongie & Malik, 2001;
Schmid 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
Origin 2: Bag-of-words models
• Orderless document representation: frequencies of words
from a dictionary (Salton & McGill, 1983)
US Presidential Speeches Tag Cloud, http://chir.ag/phernalia/preztags/
Bag-of-features steps
1. Extract features
2. Learn “visual vocabulary”
3. Quantize features using visual vocabulary
4. Represent images by frequencies of “visual words”
1. Feature extraction
• Regular grid or interest regions
[Diagram] Detect patches → normalize patch → compute descriptor
Slide credit: Josef Sivic
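A minimal sketch of the regular-grid option above, assuming a grayscale image as a 2-D NumPy array. The descriptor here is just a mean/contrast-normalized flattened patch, a stand-in for SIFT; the function name is hypothetical:

```python
import numpy as np

def extract_dense_patches(image, patch_size=8, stride=8):
    """Extract patches on a regular grid and return flattened,
    normalized descriptors (a crude stand-in for SIFT)."""
    h, w = image.shape
    descriptors = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size].astype(float).ravel()
            patch -= patch.mean()              # normalize for brightness
            norm = np.linalg.norm(patch)
            if norm > 0:
                patch /= norm                  # normalize for contrast
            descriptors.append(patch)
    return np.array(descriptors)
```

Interest-point sampling would replace the grid loop with a detector (e.g., DoG or Harris) but compute descriptors the same way.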
2. Learning the visual vocabulary
Clustering
Slide credit: Josef Sivic
Visual vocabulary
K-means clustering
• Want to minimize the sum of squared Euclidean distances between points x_i and their nearest cluster centers m_k:

D(X, M) = Σ_k Σ_{x_i ∈ cluster k} ‖x_i − m_k‖²

Algorithm:
• Randomly initialize K cluster centers
• Iterate until convergence:
  • Assign each data point to the nearest center
  • Recompute each cluster center as the mean of all points assigned to it
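The objective and the two alternating steps above can be sketched as Lloyd's algorithm in a few lines of NumPy (an illustrative version, not the lecture's reference implementation):

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Minimize the sum of squared distances between points in X
    and their nearest cluster centers (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    # randomly initialize K centers from the data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # assignment step: nearest center for each point
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # update step: each center becomes the mean of its points
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels
```

In practice one would run to convergence (or until the objective stops decreasing) with several random restarts, since k-means only finds a local minimum.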
Clustering and vector quantization
• Clustering is a common method for learning a visual vocabulary or codebook
  • Unsupervised learning process
  • Each cluster center produced by k-means becomes a codevector
  • Codebook can be learned on a separate training set
  • Provided the training set is sufficiently representative, the codebook will be “universal”
• The codebook is used for quantizing features
  • A vector quantizer takes a feature vector and maps it to the index of the nearest codevector in the codebook
• Codebook = visual vocabulary
• Codevector = visual word
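The vector quantizer described above, plus the resulting bag-of-words histogram, can be sketched as follows (NumPy arrays assumed; function names are hypothetical):

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codevector."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def bag_of_words(features, codebook):
    """Normalized histogram of visual-word frequencies for one image."""
    words = quantize(features, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1)
```

The histogram, not the raw descriptors, is what gets fed to the classifier or retrieval system.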
Example codebook
…
Source: B. Leibe
Appearance codebook
Visual vocabularies: Issues
• How to choose vocabulary size?
  • Too small: visual words not representative of all patches
  • Too large: quantization artifacts, overfitting
• Computational efficiency
  • Vocabulary trees (Nister & Stewenius, 2006)
But what about layout?
All of these images have the same color histogram
Spatial pyramid
Compute histogram in each spatial bin
Spatial pyramid representation
• Extension of a bag of features
• Locally orderless representation at several levels of resolution
[Figure] level 0, level 1, level 2
Lazebnik, Schmid & Ponce (CVPR 2006)
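One way to sketch the pyramid: concatenate a visual-word histogram per grid cell at each resolution level (this assumes per-feature (x, y) positions and visual-word indices are available; names and grid layout are illustrative, and the level weighting of the original paper is omitted):

```python
import numpy as np

def spatial_pyramid(positions, words, image_size, vocab_size, levels=3):
    """Concatenate visual-word histograms over a 1x1, 2x2, 4x4, ... grid.
    positions: (N, 2) array of (x, y) feature locations;
    words:     (N,)   visual-word index per feature."""
    w, h = image_size
    pyramid = []
    for level in range(levels):
        cells = 2 ** level
        # which grid cell each feature falls into at this level
        cx = np.minimum((positions[:, 0] * cells / w).astype(int), cells - 1)
        cy = np.minimum((positions[:, 1] * cells / h).astype(int), cells - 1)
        for j in range(cells):
            for i in range(cells):
                in_cell = (cx == i) & (cy == j)
                pyramid.append(np.bincount(words[in_cell], minlength=vocab_size))
    return np.concatenate(pyramid)
```

Level 0 is exactly the plain bag of features; higher levels add progressively finer spatial layout.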
Scene category dataset
Multi-class classification results
(100 training images per class)
Caltech101 dataset: http://www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html
Multi-class classification results (30 training images per class)
Bags of features for action recognition
Juan Carlos Niebles, Hongcheng Wang and Li Fei-Fei, Unsupervised Learning of Human
Action Categories Using Spatial-Temporal Words, IJCV 2008.
Space-time interest points
History of ideas in recognition
• 1960s – early 1990s: the geometric era
• 1990s: appearance-based models
• Mid-1990s: sliding window approaches
• Late 1990s: local features
• Early 2000s: parts-and-shape models
• Mid-2000s: bags of features
• Present trends: combination of local and global
methods, context, deep learning
Svetlana Lazebnik
Large-scale Instance Retrieval
Computer Vision
James Hays
Many slides from Derek Hoiem and Kristen Grauman
Multi-view matching
[Diagram] Matching two given views for depth vs. searching for a matching view for recognition
Kristen Grauman
Video Google System
1. Collect all words within
query region
2. Inverted file index to find
relevant frames
3. Compare word counts
4. Spatial verification
Sivic & Zisserman, ICCV 2003
• Demo online at: http://www.robots.ox.ac.uk/~vgg/research/vgoogle/index.html
Query region → retrieved frames
Kristen Grauman
Application: Large-Scale Retrieval
[Philbin CVPR’07]
Query Results from 5k Flickr images (demo available for 100k set)
Simple idea
See how many keypoints are close to keypoints in the other image
[Diagram] Lots of matches vs. few or no matches
But this will be really, really slow!
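The exhaustive comparison above can be sketched as follows; computing an N×M descriptor-distance matrix for every image pair in the database is exactly why this is too slow at scale (the function name is hypothetical):

```python
import numpy as np

def count_matches(desc_a, desc_b, threshold=0.5):
    """Count descriptors in image A whose nearest descriptor in
    image B lies within `threshold` (exhaustive O(N*M) search)."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    nearest = np.sqrt(d2.min(axis=1))       # distance to best match in B
    return int((nearest < threshold).sum())
```

Indexing with visual words, described next, avoids touching most database images at all.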
Indexing local features
• Each patch / region has a descriptor, which is a
point in some high-dimensional feature space
(e.g., SIFT)
Descriptor’s
feature space
Kristen Grauman
Indexing local features
• When we see close points in feature space, we
have similar descriptors, which indicates similar
local content.
Descriptor’s
feature space
Database
images
Query
image
Easily can have millions of features to search!
Kristen Grauman
Indexing local features: inverted file index
• For text documents, an efficient way to find all pages on which a word occurs is to use an index…
• We want to find all images in which a feature occurs.
• To use this idea, we’ll need to map our features to “visual words”.
Kristen Grauman
Visual words
• Map high-dimensional descriptors to tokens/words by quantizing the feature space
[Figure] Descriptor’s feature space, Word #2
• Quantize via clustering; let cluster centers be the prototype “words”
• Determine which word to assign to each new image region by finding the closest cluster center.
Kristen Grauman
Visual words
• Example: each
group of patches
belongs to the
same visual word
Figure from Sivic & Zisserman, ICCV 2003 Kristen Grauman
Visual vocabulary formation
Issues:
• Vocabulary size, number of words
• Sampling strategy: where to extract features?
• Clustering / quantization algorithm
• Unsupervised vs. supervised
• What corpus provides features (universal vocabulary?)
Kristen Grauman
Sampling strategies
K. Grauman, B. Leibe; image credits: F-F. Li, E. Nowak, J. Sivic
[Figure] Dense, uniformly · sparse, at interest points · randomly · multiple interest operators
• To find specific, textured objects, sparse sampling from interest points is often more reliable.
• Multiple complementary interest operators offer
more image coverage.
• For object categorization, dense sampling offers
better coverage.
[See Nowak, Jurie & Triggs, ECCV 2006]
Inverted file index
• Database images are loaded into the index, mapping words to image numbers.
• A new query image is mapped to indices of database images that share a word.
Kristen Grauman
Inverted file index
• Key requirement for inverted file index to
be efficient: sparsity
• If most pages/images contain most words
then you’re no better off than exhaustive
search.
– Exhaustive search would mean comparing the
word distribution of a query versus every
page.
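A minimal inverted file index along these lines (an illustrative sketch, not the Video Google implementation):

```python
from collections import defaultdict

class InvertedIndex:
    """Map each visual word to the set of images containing it."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, image_id, words):
        # record this image under every distinct word it contains
        for w in set(words):
            self.postings[w].add(image_id)

    def candidates(self, query_words):
        """Images sharing at least one word with the query -- efficient
        only when each word appears in few images (sparsity)."""
        result = set()
        for w in set(query_words):
            result |= self.postings.get(w, set())
        return result
```

With a sparse vocabulary, each posting list is short, so a query touches only a small fraction of the database; the candidates are then ranked by word counts and spatially verified.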
Instance recognition:
remaining issues
• How to summarize the content of an entire
image? And gauge overall similarity?
• How large should the vocabulary be? How to
perform quantization efficiently?
• Is having the same set of visual words enough to
identify the object/scene? How to verify spatial
agreement?
• How to score the retrieval results?
Kristen Grauman