Committee Update: Building a Visual Hierarchy
Andrew Smith, 30 July 2008
Outline
• Confabulation theory summary
• Comparisons to other AI techniques
• Human visual system
• Building a visual hierarchy: learning, inference
• Texture modeling (applications)
• Future work (dissertation defence, Spring 2009)
Confabulation Theory
• A theory of the mechanism of thought:
– The cortex/thalamus is divided into thousands of modules (1,000,000s of neurons).
– Each module contains a lexicon of symbols.
– Symbols are sparse populations (100s) of neurons within a module.
– Symbols are stable states of a cortex–thalamus attractor circuit.
Confabulation theory (1/4)
Key concept 1:
Modules contain symbols, the atoms of our mental universe.
• Smell module: apple, flower, rotten, …
• Word module: ‘rose’, ‘the’, ‘and’, ‘it’, ‘France’, ‘Joe’, …
• Abstract planning modules, etc.
Modules are small patches of thalamocortical neurons. Each symbol is a sparse population of those neurons.
Confabulation theory (2/4)
Key concept 2:
All cognitive knowledge is knowledge links between these symbols.
• Smell module: apple, flower, rotten, …
• Word module: ‘the’, ‘and’, ‘it’, ‘France’, ‘Joe’, ‘apple’, …
Only symbols that meaningfully co-occur may become linked.
Confabulation theory (3/4)
Key concept 3:
A confabulation operation is the universal computational mechanism.
Given evidence a, b, c, pick the answer x such that:
x = argmax_{x′} p(a, b, c | x′)
We say x has maximum cogency.
Confabulation theory (3/4)
Fundamental Theorem of Cognition:[1]
p(αβγδ|ε)⁴ = [p(αβγδ|ε)/p(α|ε)] ∙ [p(αβγδ|ε)/p(β|ε)] ∙ [p(αβγδ|ε)/p(γ|ε)] ∙ [p(αβγδ|ε)/p(δ|ε)] ∙ p(α|ε)p(β|ε)p(γ|ε)p(δ|ε)
If the first four terms remain nearly constant w.r.t. ε, maximizing the fifth term, p(α|ε)p(β|ε)p(γ|ε)p(δ|ε), maximizes cogency (the conditional joint).
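Under that approximation (and with equal symbol priors), a confabulation operation reduces to picking the candidate that maximizes a product of link probabilities, each estimated by counting. A minimal toy sketch, where the observations and symbol names are entirely invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy training "observations": sets of co-occurring symbols (hypothetical data).
observations = [
    {"rose", "flower", "red"},
    {"rose", "flower", "thorn"},
    {"apple", "fruit", "red"},
    {"apple", "fruit", "rotten"},
    {"apple", "flower", "red"},
]

# Count single symbols and co-occurring pairs.
C1 = Counter()
C2 = Counter()
for obs in observations:
    for s in obs:
        C1[s] += 1
    for a, b in combinations(sorted(obs), 2):
        C2[(a, b)] += 1
        C2[(b, a)] += 1

def p(a, x):
    """Conditional p(a | x), estimated by co-occurrence counting."""
    return C2[(a, x)] / C1[x] if C1[x] else 0.0

def confabulate(evidence, candidates):
    """Pick the candidate x maximizing p(a|x)·p(b|x)·…, which (when the
    ratio terms in the theorem are ~constant) maximizes cogency."""
    def score(x):
        s = 1.0
        for a in evidence:
            s *= p(a, x)
        return s
    return max(candidates, key=score)

print(confabulate({"flower", "red"}, ["rose", "apple"]))  # prints: rose
```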
Confabulation theory (4/4)
Key concept 4:
Each confabulation operation launches a control signal to other modules.
This is the control mechanism of inference – studied by others in the lab (not covered here).
Similarities to other AI / ML
• Bayesian networks – a special case. A “confabulation network” is similar to a Bayesian net with:
– symbolic variables (discrete, finite, mutually exclusive states) with equal priors;
– a naïve-Bayes assumption for the CP tables. Similar learning algorithms can be used (counting for the CPs).
• Hinton’s (unrestricted) Boltzmann machines – generalized:
– complete connectivity is not required;
– (many) more than two states;
– can use stochastic (Monte Carlo) ‘execution’.
Outline
• Confabulation theory summary
• Comparisons to other AI techniques
• Human visual system
• Building a visual hierarchy: learning, inference
• Texture modeling
• Future work (i.e. my thesis)
Human Visual System
1) Retina – “pixels”
2) Lateral geniculate nucleus (LGN) – “center-surround” representation
3) Primary (and higher) visual cortex (V1, …)
• Simple cells: Hubel & Wiesel (1959); modeled by Gabor filters
• Complex cells: more complicated (end-stops, bars, ???)
We take inspiration from these for our first- and second-level features.
Outline
• Confabulation theory summary
• Comparisons to other AI techniques
• Human visual system
• Building a visual hierarchy: learning, inference
• Texture modeling
• Future work (i.e. my thesis)
Confabulation & vision
Features (symbols) develop in each layer of the hierarchy as commonly seen patterns of that layer’s inputs.
Knowledge links are simple conditional probabilities p(α|β), where α and β are symbols in connected modules. All knowledge can therefore be learned by simple co-occurrence counting:
p(α|β) = C(α, β) / C(β)
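The counting rule fits in a few lines; here is a sketch with a hypothetical stream of (α, β) co-activations (the symbol names are invented):

```python
from collections import Counter

# Hypothetical stream of (alpha, beta) symbol co-activations.
pairs = [("edge_h", "bar"), ("edge_h", "bar"),
         ("edge_v", "bar"), ("edge_h", "corner")]

C_joint = Counter(pairs)                    # C(alpha, beta)
C_single = Counter(b for _, b in pairs)     # C(beta)

def link_strength(alpha, beta):
    """p(alpha | beta) = C(alpha, beta) / C(beta), by co-occurrence counting."""
    return C_joint[(alpha, beta)] / C_single[beta]

print(link_strength("edge_h", "bar"))  # 2 of the 3 "bar" events -> 2/3
```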
Building a vision hierarchy
• We can no longer use the sum of squared errors (SSE) to evaluate the model.
• Instead, make use of a generative model: always be able to generate a plausible image.
Vision Hierarchy – level “0”
We know the first transformation from neuroscience research: simple cells approximate Gabor filters. We use 5 scales and 16 orientations (odd + even phase).
Vision Hierarchy – level “0”
• Does the full convolution preserve the information in images? (Inverted by least squares.)
• Very closely.
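A level-0 bank along these lines can be sketched as below; note the wavelength spacing, envelope width, and kernel sizes here are guesses for illustration, not the talk’s actual parameters.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, phase, sigma):
    """One Gabor filter: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

def gabor_bank(n_scales=5, n_orientations=16):
    """Build the bank: every scale x orientation, odd + even phase."""
    bank = []
    for s in range(n_scales):
        wavelength = 4.0 * 2**s            # hypothetical scale spacing
        sigma = 0.5 * wavelength
        size = int(4 * sigma) | 1          # odd kernel size
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            for phase in (0.0, np.pi / 2):  # even (cosine) and odd (sine)
                bank.append(gabor_kernel(size, wavelength, theta, phase, sigma))
    return bank

bank = gabor_bank()
print(len(bank))  # 5 scales x 16 orientations x 2 phases = 160 filters
```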
Vision Hierarchy – level 1
• We now have a simple-cell-like representation.
• How do we create a symbolic representation?
• Apply the principle: collect common sets of inputs from simple cells – similar to a vector quantizer.
• Keep the 5 scales separate (quantize 16 dimensions, not 80).
Vision Hierarchy – level 1
• To create actual symbols, we use a vector quantizer.
– Trade-offs (quantizer threshold): number of symbols vs. preservation of information vs. probability accuracy.
• Solution: use an angular distance metric (dot product).
– Keep only symbols that occurred more than 200 times in the training set, to get accurate probability estimates.
– After training, ~95% of samples should be within threshold of at least one symbol.
– Pick a threshold so images can be plausibly generated.
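A minimal sketch of such an angular vector quantizer; the data, threshold, and the small `min_count` (a toy stand-in for the real 200-sample cutoff) are all invented:

```python
import numpy as np

def learn_symbols(vectors, threshold=0.9, min_count=3):
    """Online angular vector quantizer (a sketch). A training vector matches
    an existing symbol if the cosine similarity of their unit vectors exceeds
    `threshold`; otherwise it seeds a new symbol. Rarely-seen symbols are
    pruned so probability estimates stay accurate."""
    symbols, counts = [], []
    for v in vectors:
        u = v / np.linalg.norm(v)
        if symbols:
            sims = np.array([u @ s for s in symbols])
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                counts[best] += 1
                continue
        symbols.append(u)
        counts.append(1)
    return [s for s, c in zip(symbols, counts) if c >= min_count]

rng = np.random.default_rng(0)
# Toy data: noisy copies of two underlying directions, plus one outlier.
a, b = np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0])
data = [a + rng.normal(0, 0.05, 4) for _ in range(10)]
data += [b + rng.normal(0, 0.05, 4) for _ in range(10)]
data.append(np.array([0, 0, 1.0, 0]))  # seen only once -> pruned
syms = learn_symbols(data)
print(len(syms))  # the two common directions survive; the outlier does not
```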
Vision Hierarchy – level 1
Oops! Ignoring wavelet magnitude makes all “texture features” equally prominent.
Vision Hierarchy – level 1
• Solution: bin responses into 5 magnitudes, then apply the vector quantizers.
Vision Hierarchy – level 1
• ~10,000 symbols are learned for each of the 5 scales.
• Complex features develop.
Outline
• Confabulation theory summary
• Comparisons to other AI techniques
• Human visual system
• Building a visual hierarchy: learning, inference
• Texture modeling
• Future work (i.e. my thesis)
Texture modeling – Learning
We can now represent an image as five superimposed grids of symbols. Transform the data set, and learn which symbols are typically next to which (knowledge links).
Knowledge links:
• Learn which symbols may be next to which symbols (conditional probabilities).
• Learn which symbols may be over/under which symbols (across scales).
• Go out to ‘radius’ 5.
Texture modeling – Inference 1
What if a portion of our image’s symbol representation is damaged? (Blind spot, CCD defect, brain lesion.)
We can use confabulation (generation) to infer a plausible replacement.
Texture modeling – Inference 1
• Fill in the missing region by confabulating from lateral and different-scale neighbors (radius 5).
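The fill-in inference can be sketched on a toy symbol grid. This uses lateral links only, radius 2 instead of 5, and an invented two-symbol periodic texture, so it is an illustration of the mechanism, not the talk’s implementation:

```python
import numpy as np
from collections import Counter

# Toy "symbol image": a periodic 2-symbol texture (checkerboard of 0/1).
grid = np.indices((8, 8)).sum(axis=0) % 2
H, W = grid.shape

# Learn links: for each lateral offset up to a radius, count which symbol
# pairs co-occur, giving p(neighbor_symbol | center_symbol, offset).
RADIUS = 2
offsets = [(dy, dx) for dy in range(-RADIUS, RADIUS + 1)
           for dx in range(-RADIUS, RADIUS + 1) if (dy, dx) != (0, 0)]
pair_counts = {off: Counter() for off in offsets}
for y in range(H):
    for x in range(W):
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                pair_counts[(dy, dx)][(grid[y, x], grid[ny, nx])] += 1

def confabulate_cell(y, x, candidates=(0, 1)):
    """Infer the missing symbol at (y, x): maximize the product of link
    probabilities p(observed neighbor | candidate, offset)."""
    best, best_score = None, -1.0
    for c in candidates:
        score = 1.0
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                n = pair_counts[(dy, dx)][(c, grid[ny, nx])]
                d = sum(v for (a, _), v in pair_counts[(dy, dx)].items() if a == c)
                score *= n / d if d else 0.0
        if score > best_score:
            best, best_score = c, score
    return best

print(confabulate_cell(3, 3) == grid[3, 3])  # the texture is recovered
```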
Texture modeling
Conclusions
This visual hierarchy does an excellent job of capturing an image up to a certain order of complexity.
Given this visual hierarchy and its learned knowledge links, missing regions can be plausibly filled in. This could be a reasonable explanation for what animals do.
Texture modeling – Inference 2
• Super-resolution: if we have a low-resolution image, can we confabulate (generate) a high-resolution version?
– “Space out” the symbols, and confabulate values for the new neighbors.
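The “spacing out” step itself is simple to sketch: known symbols land on a sparse lattice, and the marked gaps are then candidates for the same neighbor-confabulation used for missing regions. The marker value and upsampling factor here are arbitrary choices:

```python
import numpy as np

def space_out(grid, factor=2, empty=-1):
    """'Space out' a symbol grid for super-resolution: copy known symbols
    onto every `factor`-th cell of a larger grid, leaving gaps (marked
    `empty`) to be confabulated from their new neighbors."""
    H, W = grid.shape
    up = np.full((H * factor, W * factor), empty, dtype=grid.dtype)
    up[::factor, ::factor] = grid
    return up

low = np.arange(9).reshape(3, 3)
high = space_out(low)
print(high.shape)          # (6, 6)
print((high != -1).sum())  # 9 known symbols, 27 gaps to confabulate
```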
Texture modeling
Super-resolution: conclusions
Having learned the statistics of natural images, this hierarchy’s generative machinery can confabulate (generate) plausible high-resolution versions of its input.
Outline
• Confabulation theory summary
• Comparisons to other AI techniques
• Human visual system
• Building a visual hierarchy: learning, inference
• Texture modeling
• Future work (dissertation)