Segmentation — C. Phillips, Institut Montefiore, ULg, 2006


TRANSCRIPT

Page 1: Segmentation

C. Phillips, Institut Montefiore, ULg, 2006

Page 2:

In image analysis, segmentation is the partition of a digital image into multiple regions (sets of pixels), according to some criterion.

The goal of segmentation is typically to locate certain objects of interest which may be depicted in the image.

Segmentation criteria can be arbitrarily complex and may take into account both global and local image properties. A common requirement is that each region be connected in some sense.

Definition

Page 3:

A simple example of segmentation is thresholding a grayscale image with a fixed threshold t: each pixel p is assigned to one of two classes, P0 or P1, depending on whether I(p) < t or I(p) ≥ t.

t = 0.5
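As a concrete illustration, fixed thresholding can be written in a few lines of NumPy. The function name and the toy array below are ours, not the slides':

```python
import numpy as np

def threshold(img, t=0.5):
    """Binary segmentation of a grayscale image with a fixed threshold t:
    a pixel p goes to class P1 where I(p) >= t, and to P0 where I(p) < t."""
    return (img >= t).astype(np.uint8)

# Toy 2x2 "image": pixels at or above t end up in class 1 (P1).
img = np.array([[0.2, 0.7],
                [0.5, 0.1]])
print(threshold(img, t=0.5))  # [[0 1]
                              #  [1 0]]
```

Note that the pixel equal to t (0.5) lands in P1, matching the I(p) ≥ t convention above.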

Page 4:

Example: medical imaging...

Page 5:

How to fix the threshold?

Page 6:

Goal of brain image segmentation

Split the head volume into its "main" components:

• gray matter (GM)
• white matter (WM)
• cerebro-spinal fluid (CSF)
• the rest/others
• (tumour)

Page 7:

Manual segmentation: an operator classifies the voxels manually.

Segmentation approaches

Page 8:

Semi-automatic segmentation: an operator defines a set of parameters that are passed to an algorithm.

Example: threshold at t = 200


Page 9:

Segmentation approaches

Automatic segmentation: no operator intervention; objective and reproducible.

Page 10:

Model the histogram of the image!

Intensity-based segmentation

Page 11:

Segmentation - Mixture Model

• Intensities are modelled by a mixture of K Gaussian distributions, parameterised by:
– means
– variances
– mixing proportions

[Figure: image histogram modelled as a mixture of three Gaussian components, labelled 1, 2, 3]

Page 12:

Segmentation - Algorithm

Starting estimates for belonging probabilities
↓
Compute Gaussian parameters from belonging probabilities
↓
Compute belonging probabilities from Gaussian parameters
↓
Converged? No: loop back. Yes: STOP.

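The loop above is the EM algorithm for a mixture of Gaussians. A minimal 1-D sketch follows; this is our own illustrative code (not SPM's implementation), and the quantile-based initialisation is an assumption:

```python
import numpy as np

def fit_mog(y, K=3, n_iter=200, tol=1e-8):
    """Fit a 1-D mixture of K Gaussians to intensities y with EM.
    Returns mixing proportions, means, variances and belonging probabilities."""
    y = np.asarray(y, dtype=float)
    n = y.size
    # Starting estimates: spread the means over the intensity range.
    mu = np.quantile(y, (np.arange(K) + 0.5) / K)
    var = np.full(K, y.var() / K + 1e-12)
    gamma = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # Compute belonging probabilities from the Gaussian parameters (E-step).
        dens = gamma * np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        tot = dens.sum(axis=1, keepdims=True) + 1e-300
        p = dens / tot
        ll = np.log(tot).sum()          # log-likelihood of the whole image
        if ll - prev_ll < tol:          # converged? then stop
            break
        prev_ll = ll
        # Compute Gaussian parameters from the belonging probabilities (M-step).
        w = p.sum(axis=0) + 1e-12       # soft voxel counts per class
        gamma = w / n                   # mixing proportions
        mu = (p * y[:, None]).sum(axis=0) / w
        var = (p * (y[:, None] - mu) ** 2).sum(axis=0) / w + 1e-12
    return gamma, mu, var, p
```

For example, `fit_mog(intensities, K=3)` would return one component per tissue class plus the per-voxel belonging probabilities.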

Page 13:

Segmentation - Problems

Noise & Partial volume effect

Page 14:

MR images are corrupted by a smooth intensity non-uniformity (bias).

[Figure: image with bias artefact, estimated intensity bias field, corrected image]

Page 15:

Segmentation - Priors

Overlay prior belonging probability maps to assist the segmentation:
– The prior probability of each voxel being of a particular type is derived from segmented images of 151 subjects.
– Assumed to be representative.
– Requires initial registration to standard space.

Page 16:

• Bias correction informs segmentation

• Registration informs segmentation

• Segmentation informs bias correction

• Bias correction informs registration

• Segmentation informs registration

Unified approach: segmentation-correction-registration

Page 17:

Unified Segmentation

• The solution to this circularity is to put everything in the same Generative Model.
– A MAP solution is found by repeatedly alternating among classification, bias correction and registration steps.

• The Generative Model involves:
– Mixture of Gaussians (MOG)
– Bias Correction Component
– Warping (Non-linear Registration) Component

Page 18:

Gaussian Probability Density

• If intensities are assumed to be Gaussian with mean μk and variance σk², then the probability of a value yi is:
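The formula image did not survive the transcript; consistent with the description above, it is presumably the standard normal density:

```latex
P(y_i \mid c_i = k, \mu_k, \sigma_k) =
  \frac{1}{\sqrt{2\pi\sigma_k^2}}
  \exp\!\left(-\frac{(y_i - \mu_k)^2}{2\sigma_k^2}\right)
```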

Page 19:

Non-Gaussian Probability Distribution

• A non-Gaussian probability density function can be modelled by a Mixture of Gaussians (MOG):

Mixing proportions: positive and sum to one.
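The MOG formula itself is missing from the transcript; a reconstruction consistent with the text (K Gaussian components weighted by mixing proportions γk) is:

```latex
P(y_i) = \sum_{k=1}^{K} \gamma_k \, P(y_i \mid c_i = k, \mu_k, \sigma_k),
\qquad \gamma_k \ge 0, \quad \sum_{k=1}^{K} \gamma_k = 1
```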

Page 20:

Mixing Proportions

• The mixing proportion γk represents the prior probability of a voxel being drawn from class k, irrespective of its intensity.

• So:

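The formula after "So:" is missing; given the definition above it is presumably:

```latex
P(c_i = k) = \gamma_k
\quad\Rightarrow\quad
P(y_i) = \sum_{k=1}^{K} P(y_i \mid c_i = k)\, P(c_i = k)
```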

Page 21:

Non-Gaussian Intensity Distributions

• Multiple Gaussians per tissue class allow non-Gaussian intensity distributions to be modelled.

Page 22:

Probability of Whole Dataset

• If the voxels are assumed to be independent, then the probability of the whole image is the product of the probabilities of each voxel:

• It is often easier to work with negative log-probabilities:
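Both formulas are missing from the transcript; under the stated independence assumption they are presumably:

```latex
P(\mathbf{y}) = \prod_{i} P(y_i),
\qquad
E = -\log P(\mathbf{y}) = -\sum_{i} \log P(y_i)
```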

Page 23:

Modelling a Bias Field

• A bias field is included, such that the required scaling at voxel i, parameterised by β, is ρi(β).

• Replace the means μk by μk/ρi(β).
• Replace the variances σk² by (σk/ρi(β))².


Page 24:

Modelling a Bias Field

• After rearranging:
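The rearranged formula is missing from the transcript; with the bias absorbed into the data rather than the means, it presumably reads:

```latex
P(y_i \mid c_i = k, \mu_k, \sigma_k, \beta)
  = \frac{\rho_i(\beta)}{\sqrt{2\pi\sigma_k^2}}
    \exp\!\left(-\frac{\left(\rho_i(\beta)\, y_i - \mu_k\right)^2}{2\sigma_k^2}\right)
```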

Page 25:

Tissue Probability Maps

• Tissue probability maps (TPMs) are used instead of the proportion of voxels in each Gaussian as the prior.

ICBM Tissue Probabilistic Atlases. These tissue probability maps are kindly provided by the International Consortium for Brain Mapping, John C. Mazziotta and Arthur W. Toga.

Page 26:

“Mixing Proportions”

• Tissue probability maps for each class are available.

• The probability of obtaining class k at voxel i, given the weights γ, is then:
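The formula is missing from the transcript. Writing bik for the tissue probability map value of class k at voxel i (a notation assumed here), it presumably takes the form:

```latex
P(c_i = k \mid \boldsymbol{\gamma})
  = \frac{\gamma_k\, b_{ik}}{\sum_{j=1}^{K} \gamma_j\, b_{ij}}
```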

Page 27:

Deforming the Tissue Probability Maps

• Tissue probability images are deformed according to parameters α.

• The probability of obtaining class k at voxel i, given the weights γ and parameters α, is then:
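The formula is missing; with bik(α) denoting the deformed tissue probability map value (notation assumed here), it is presumably the same ratio with the deformation applied:

```latex
P(c_i = k \mid \boldsymbol{\gamma}, \boldsymbol{\alpha})
  = \frac{\gamma_k\, b_{ik}(\boldsymbol{\alpha})}
         {\sum_{j=1}^{K} \gamma_j\, b_{ij}(\boldsymbol{\alpha})}
```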

Page 28:

The Extended Model

• By combining the modified P(ci = k | γ, α) and P(yi | ci = k, μ, σ, β), the overall objective function (E) becomes:
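The objective function itself did not survive the transcript; combining the pieces reconstructed above, it presumably reads:

```latex
E = -\sum_i \log \sum_{k=1}^{K}
    \frac{\gamma_k\, b_{ik}(\boldsymbol{\alpha})}{\sum_j \gamma_j\, b_{ij}(\boldsymbol{\alpha})}
    \,\frac{\rho_i(\boldsymbol{\beta})}{\sqrt{2\pi\sigma_k^2}}
    \exp\!\left(-\frac{\left(\rho_i(\boldsymbol{\beta})\, y_i - \mu_k\right)^2}{2\sigma_k^2}\right)
```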

Page 29:

Optimisation

• The “best” parameters are those that minimise this objective function.

• Optimisation involves finding them.
• Begin with starting estimates, and repeatedly change them so that the objective function decreases each time.

Page 30:

Schematic of optimisation

Repeat until convergence...

Hold γ, μ, σ² and α constant, and minimise E w.r.t. β
– Levenberg-Marquardt strategy, using dE/dβ and d²E/dβ²

Hold γ, μ, σ² and β constant, and minimise E w.r.t. α
– Levenberg-Marquardt strategy, using dE/dα and d²E/dα²

Hold α and β constant, and minimise E w.r.t. γ, μ and σ²
– Use an Expectation Maximisation (EM) strategy.

end

(Iterated Conditional Modes)

Page 31:

Levenberg-Marquardt Optimisation

• LM optimisation is used for the nonlinear registration and bias correction components.

• Requires first and second derivatives of the objective function (E).

• Parameters β and α are updated by:
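The update formula is missing from the transcript; the standard Levenberg-Marquardt step for a parameter vector θ (here β or α) with damping λ is:

```latex
\boldsymbol{\theta}^{(n+1)} = \boldsymbol{\theta}^{(n)}
  - \left(\frac{\partial^2 E}{\partial \boldsymbol{\theta}^2}
          + \lambda \mathbf{I}\right)^{-1}
    \frac{\partial E}{\partial \boldsymbol{\theta}}
```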

• Increase λ to improve stability (at the expense of a slower convergence).

Page 32:

EM is used to update γ, μ and σ².

For iteration (n), alternate between:
– E-step: estimate the belonging probabilities from the current parameters:
– M-step: set the (n+1) parameter estimates to values that reduce:
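The E-step formula did not survive the transcript; the standard EM responsibility for a mixture is presumably:

```latex
p_{ik} = \frac{P(c_i = k)\, P\!\left(y_i \mid c_i = k, \mu_k^{(n)}, \sigma_k^{(n)}\right)}
              {\sum_{j=1}^{K} P(c_i = j)\, P\!\left(y_i \mid c_i = j, \mu_j^{(n)}, \sigma_j^{(n)}\right)}
```

The M-step then minimises the expected negative log-probability under these pik, which yields closed-form updates for the means, variances and mixing proportions.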

Page 33:

Voxels are assumed independent!

Page 34:

Hidden Markov Random Field

Voxels are NOT independent: GM voxels are surrounded by other GM voxels, at least on one side.

Model the intensity and classification of the image voxels by two random fields:

• a visible field y for the intensities
• a hidden field c for the classifications

Modify the cost function E:

And, at each voxel, the 6 neighbouring voxels are used to build Umrf, imposing local spatial constraints.
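The modified cost function is missing from the transcript; a standard HMRF formulation adds a spatial energy term over each voxel's neighbourhood Ni (notation assumed here) to E:

```latex
E' = E + \sum_i U_{\mathrm{mrf}}\!\left(c_i \mid c_{\mathcal{N}_i}\right)
```

with Ni the 6-neighbourhood of voxel i, so that configurations where a voxel's class disagrees with its neighbours are penalised.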

Page 35:

Hidden Markov Random Field

T1 image T2 image

Page 36:

Hidden Markov Random Field

White matter

T1 & T2: MoG + HMRF / T1 only: MoG only

Page 37:

Hidden Markov Random Field

Gray matter

T1 & T2: MoG + HMRF / T1 only: MoG only

Page 38:

Hidden Markov Random Field

CSF

T1 & T2: MoG + HMRF / T1 only: MoG only

Page 39:

Perspectives

• Multimodal segmentation: one image is good but two are better! Model the joint histogram using multi-dimensional normal distributions.

• Tumour detection: contrasted images to modify the prior images; automatic detection of outliers?

Page 40:

Thank you for your attention!