Markov random field: A brief introduction



Markov random field: A brief introduction

Tzu-Cheng Jen

Institute of Electronics, NCTU

2007-03-28


Outline

Neighborhood system and cliques

Markov random field

Optimization-based vision problem

Solver for the optimization problem


Neighborhood system and cliques


Prior knowledge

In order to explain the concept of the MRF, we first introduce the following definitions:

1. i: site (pixel)

2. N_i: the set of sites neighboring i

3. S: the set of sites (image)

4. f_i: the value at site i (intensity)

f1 f2 f3

f4 fi f6

f7 f8 f9

A 3×3 example image


Neighborhood system

The sites in S are related to one another via a neighborhood system, defined as

$$N = \{ N_i \mid \forall i \in S \}$$

where N_i is the set of sites neighboring i. The neighboring relationship has the following properties:

(1) A site is not a neighbor of itself: $i \notin N_i$

(2) The neighboring relationship is mutual: $i' \in N_i \iff i \in N_{i'}$

f1 f2 f3

f4 fi f6

f7 f8 f9


Neighborhood system: Example

First order neighborhood system

Second order neighborhood system

Nth order neighborhood system
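To make the neighborhood orders concrete, here is a minimal sketch (mine, not the slides') that computes the first- and second-order neighborhood of a pixel on a regular 2-D grid; the function name and image size are illustrative assumptions.

```python
# Illustrative sketch: first- and second-order neighborhoods on a 2-D grid.
def neighbors(i, j, height, width, order=1):
    """Return the neighbor set N_i of pixel (i, j).

    order=1: the 4 horizontally/vertically adjacent pixels.
    order=2: the 8 surrounding pixels (adds the diagonals).
    """
    if order == 1:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)]  # a site is not its own neighbor
    return [(i + di, j + dj) for di, dj in offsets
            if 0 <= i + di < height and 0 <= j + dj < width]

print(neighbors(1, 1, 3, 3, order=1))     # [(0, 1), (2, 1), (1, 0), (1, 2)]
print(len(neighbors(1, 1, 3, 3, order=2)))  # 8
```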


Neighborhood system: Example

In the example figure, the neighboring sites of site i are m, n, and f; the neighboring sites of site j are r and x.


Clique

A clique c is defined as a subset of sites in S in which every pair of distinct sites are neighbors of one another (single-site subsets also count as cliques). Some examples follow.


Clique: Example

Take the first-order and the second-order neighborhood systems as examples:

First-order neighborhood system: the clique types are single sites and horizontally/vertically adjacent pairs.

Second-order neighborhood system: in addition, diagonally adjacent pairs, triples, and 2×2 quadruples.
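As a small self-contained illustration (not from the slides), the pair cliques of a first-order system on an H × W image can be enumerated directly:

```python
# Illustrative sketch: enumerate the pair cliques {i, i'} of a first-order
# (4-connected) neighborhood system on an H x W grid.
def pair_cliques(height, width):
    cliques = []
    for i in range(height):
        for j in range(width):
            if j + 1 < width:   # horizontal pair
                cliques.append(((i, j), (i, j + 1)))
            if i + 1 < height:  # vertical pair
                cliques.append(((i, j), (i + 1, j)))
    return cliques

print(len(pair_cliques(3, 3)))  # 12 pair cliques on a 3x3 image
```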


Markov random field


Markov random field (MRF)

View the 2D image f as a collection of random variables (a random field).

A random field is said to be a Markov random field if it satisfies the following properties:

Image configuration f

f1 f2 f3

f4 fi f6

f7 f8 f9

$$(1)\;\; P(f) > 0, \quad \forall f \in \mathbb{F} \quad \text{(Positivity)}$$

$$(2)\;\; P(f_i \mid f_{S \setminus \{i\}}) = P(f_i \mid f_{N_i}) \quad \text{(Markovianity)}$$

where $\mathbb{F}$ is the set of all possible configurations and $f_{S \setminus \{i\}}$ denotes the values at all sites except i.


Gibbs random field (GRF) and Gibbs distribution

A random field is said to be a Gibbs random field if and only if its configuration f obeys a Gibbs distribution, that is:

Image configuration f

f1 f2 f3

f4 fi f6

f7 f8 f9

$$P(f) = Z^{-1} e^{-U(f)/T}$$

$$U(f) = \sum_{c \in C} V_c(f) = \sum_{\{i\} \in C_1} V_1(f_i) + \sum_{\{i, i'\} \in C_2} V_2(f_i, f_{i'}) + \cdots$$

U(f): energy function; T: temperature; V_c(f): clique potential; Z: normalizing constant (the partition function)

Design U for different applications
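As a concrete illustration of these formulas, the sketch below evaluates U(f) for a small image using pair-clique potentials and forms the unnormalized Gibbs factor exp(-U(f)/T); the quadratic potential V_2(f_i, f_i') = beta (f_i - f_i')^2 is an assumed choice, not one taken from the slides.

```python
import math

# Illustrative sketch: energy U(f) of an image under pair-clique potentials,
# and the corresponding unnormalized Gibbs factor exp(-U(f)/T).
def energy(f, beta=1.0):
    h, w = len(f), len(f[0])
    u = 0.0
    for i in range(h):
        for j in range(w):
            if j + 1 < w:   # horizontal pair clique {(i,j), (i,j+1)}
                u += beta * (f[i][j] - f[i][j + 1]) ** 2
            if i + 1 < h:   # vertical pair clique {(i,j), (i+1,j)}
                u += beta * (f[i][j] - f[i + 1][j]) ** 2
    return u

f = [[0.0, 0.1, 0.0],
     [0.1, 0.9, 0.1],
     [0.0, 0.1, 0.0]]
T = 1.0
print(energy(f))                 # smooth images get low energy
print(math.exp(-energy(f) / T))  # unnormalized Gibbs factor
```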


Markov-Gibbs equivalence

Hammersley-Clifford theorem: A random field F is an MRF if and only if F is a GRF

Proof (⇐): Let P(f) be a Gibbs distribution on S with the neighborhood system N.

f1 f2 f3

f4 fi f6

f7 f8 f9

A 3×3 example image

We must show that the local conditionals of P depend only on the neighbors, i.e.

$$P(f_i \mid f_{S \setminus \{i\}}) = P(f_i \mid f_{N_i})$$

By the definition of conditional probability,

$$P(f_i \mid f_{S \setminus \{i\}}) = \frac{P(f)}{P(f_{S \setminus \{i\}})} = \frac{e^{-\sum_{c \in C} V_c(f)}}{\sum_{f_i'} e^{-\sum_{c \in C} V_c(f')}}$$

where $f' = \{f_1, \ldots, f_{i-1}, f_i', f_{i+1}, \ldots, f_m\}$ agrees with f everywhere except possibly at site i.


Markov-Gibbs equivalence

Divide C into two sets A and B, with A consisting of the cliques that contain i and B of the cliques that do not contain i:

A 3×3 example image

f1 f2 f3

f4 fi f6

f7 f8 f9

$$P(f_i \mid f_{S \setminus \{i\}}) = \frac{\left[e^{-\sum_{c \in A} V_c(f)}\right]\left[e^{-\sum_{c \in B} V_c(f)}\right]}{\sum_{f_i'} \left\{\left[e^{-\sum_{c \in A} V_c(f')}\right]\left[e^{-\sum_{c \in B} V_c(f')}\right]\right\}}$$

Because no clique in B contains site i, V_c(f') = V_c(f) for every c ∈ B, so the B factors cancel between numerator and denominator:

$$P(f_i \mid f_{S \setminus \{i\}}) = \frac{e^{-\sum_{c \in A} V_c(f)}}{\sum_{f_i'} e^{-\sum_{c \in A} V_c(f')}} = P(f_i \mid f_{N_i})$$

The right-hand side depends only on f_i and the values at the neighbors of i, since every clique in A contains i and therefore consists of i and its neighbors. This proves Markovianity.


Optimization-based vision problem


Denoising

(Figure: noisy signal d and denoised signal f.)


MAP formulation for denoising problem

The signal denoising problem can be modeled as a MAP estimation problem, that is,

$$f^* = \arg\max_f \{ p(f \mid d) \}$$

By Bayes' rule:

$$f^* = \arg\max_f \{ p(d \mid f)\, p(f) \}$$

f: unknown data; d: observed data; p(f): prior model; p(d | f): observation model


MAP formulation for denoising problem

Assume the observation is the true signal plus independent Gaussian noise, that is,

$$d_i = f_i + e_i, \qquad e_i \sim N(0, \sigma^2)$$

Under this assumption, the observation model can be expressed as

$$p(d \mid f) = \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(f_i - d_i)^2 / 2\sigma^2} = \frac{1}{(2\pi\sigma^2)^{m/2}}\, e^{-U(d \mid f)}$$

where the likelihood energy is

$$U(d \mid f) = \sum_{i=1}^{m} \frac{(f_i - d_i)^2}{2\sigma^2}$$


MAP formulation for denoising problem

Assuming the unknown data f is an MRF, the prior model is:

$$P(f) = Z^{-1} e^{-U(f)/T}$$

Based on the above, the posterior probability becomes

$$p(f \mid d) \propto p(d \mid f)\, P(f) = \frac{1}{(2\pi\sigma^2)^{m/2}}\, e^{-\sum_{i=1}^{m} (f_i - d_i)^2 / 2\sigma^2} \cdot Z^{-1} e^{-U(f)/T}$$


MAP formulation for denoising problem

The MAP estimator for the problem is:

$$f^* = \arg\max_f \{ p(f \mid d) \} = \arg\max_f \{ p(d \mid f)\, p(f) \}$$

$$= \arg\max_f \left\{ \frac{1}{(2\pi\sigma^2)^{m/2}}\, e^{-\sum_{i=1}^{m} (f_i - d_i)^2 / 2\sigma^2} \cdot Z^{-1} e^{-U(f)/T} \right\}$$

$$= \arg\min_f \left\{ \sum_{i=1}^{m} \frac{(f_i - d_i)^2}{2\sigma^2} + \frac{U(f)}{T} \right\}$$

$$= \arg\min_f \{ U(d \mid f) + U(f) \}$$

(absorbing the temperature T into U(f))


MAP formulation for denoising problem

Define the smoothness prior:

$$U(f) = \sum_i (f_i - f_{i-1})^2$$

Substituting the above into the MAP estimator, we get:

$$f^* = \arg\max_f \{ p(f \mid d) \} = \arg\min_f \{ U(d \mid f) + U(f) \} = \arg\min_f \left\{ \sum_{i=1}^{m} \frac{(f_i - d_i)^2}{2\sigma^2} + \sum_{i=1}^{m} (f_i - f_{i-1})^2 \right\}$$

The first sum is the observation model (similarity measure); the second sum is the prior model (reconstruction constraint).
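To make the estimator concrete, here is a minimal sketch (not from the slides) that minimizes this denoising energy by gradient descent on a 1-D signal; the noise level, step size, iteration count, and test signal are illustrative assumptions.

```python
import random

# Minimal sketch: minimize the denoising energy
#   E(f) = sum_i (f_i - d_i)^2 / (2 sigma^2) + sum_i (f_i - f_{i-1})^2
# by gradient descent on a 1-D signal.
def denoise(d, sigma=0.3, step=0.02, iters=5000):
    m = len(d)
    f = list(d)  # initialize with the noisy observation
    for _ in range(iters):
        g = [(f[i] - d[i]) / sigma**2 for i in range(m)]  # data term gradient
        for i in range(1, m):                             # smoothness gradient
            diff = 2.0 * (f[i] - f[i - 1])
            g[i] += diff
            g[i - 1] -= diff
        f = [f[i] - step * g[i] for i in range(m)]
    return f

random.seed(0)
clean = [1.0 if 20 <= i < 40 else 0.0 for i in range(60)]   # step signal
noisy = [x + random.gauss(0.0, 0.3) for x in clean]
f_hat = denoise(noisy)
print(round(f_hat[30], 2))  # near 1.0: the noise is largely smoothed away
```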


Super-resolution

Super-Resolution (SR): A method to reconstruct high-resolution images/videos from low-resolution images/videos


Super-resolution

Illustration for super-resolution

(Figure: the low-resolution frames d(1), d(2), d(3), d(4) are used to reconstruct the high-resolution frame f(1).)


MAP formulation for super-resolution problem

The super-resolution problem can be modeled as a MAP estimation problem, that is,

$$f^* = \arg\max_f \{ p(f \mid d^{(1)}, d^{(2)}, \ldots, d^{(M)}) \}$$

By Bayes' rule:

$$f^* = \arg\max_f \{ p(d^{(1)}, d^{(2)}, \ldots, d^{(M)} \mid f)\, p(f) \}$$

f: high-resolution image; d^{(i)}: low-resolution images; p(f): prior model; p(d^{(1)}, ..., d^{(M)} | f): observation model


MAP formulation for super-resolution problem

The conditional PDF can be modeled as a Gaussian distribution if the noise source is Gaussian:

$$p(d^{(1)}, d^{(2)}, \ldots, d^{(M)} \mid f) \propto \exp\!\left(-H(d^{(1)}, d^{(2)}, \ldots, d^{(M)}, f)\right)$$

We also assume the prior model is a joint Gaussian distribution:

$$p(f) \propto \exp\!\left(-(f - \mu)^T \Sigma^{-1} (f - \mu)\right)$$

where μ is the mean of f and Σ is its covariance matrix.


MAP formulation for super-resolution problem

Substituting the above relations into the MAP estimator, we get the following expression:

$$f^* = \arg\max_f \{ p(d^{(1)}, d^{(2)}, \ldots, d^{(M)} \mid f)\, p(f) \}$$

$$= \arg\max_f \left\{ \exp\!\left(-\left(H(d^{(1)}, d^{(2)}, \ldots, d^{(M)}, f) + (f - \mu)^T \Sigma^{-1} (f - \mu)\right)\right) \right\}$$

$$= \arg\min_f \left\{ H(d^{(1)}, d^{(2)}, \ldots, d^{(M)}, f) + (f - \mu)^T \Sigma^{-1} (f - \mu) \right\} = \arg\min_f E(f)$$

The H term is the observation model; the quadratic term is the prior model.
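As a hedged illustration of minimizing E(f): the transcript does not spell out the data term H, so the sketch below assumes the common choice H = Σ_k ||A_k f − d^(k)||², where each A_k is a known downsampling/warping operator, and minimizes E(f) by gradient descent. The operators, step size, and toy signal are my assumptions.

```python
import numpy as np

# Hedged sketch: minimize E(f) = sum_k ||A_k f - d_k||^2
#                              + (f - mu)^T Sigma_inv (f - mu)
# by gradient descent. The quadratic data term is an assumed (common) choice.
def sr_gradient_descent(A_list, d_list, mu, Sigma_inv, step=0.5, iters=500):
    f = mu.copy()  # start from the prior mean
    for _ in range(iters):
        g = 2.0 * Sigma_inv @ (f - mu)        # prior term gradient
        for A, d in zip(A_list, d_list):
            g += 2.0 * A.T @ (A @ f - d)      # data term gradient
        f -= step * g                         # step must suit the operators
    return f

# Toy usage: a 2x-downsampling (pair-averaging) operator on a length-8 signal.
n, m = 8, 4
A = np.zeros((m, n))
for r in range(m):
    A[r, 2 * r: 2 * r + 2] = 0.5
true_f = np.linspace(0.0, 1.0, n)
d_list = [A @ true_f, A @ true_f]             # two noiseless toy "frames"
f_hat = sr_gradient_descent([A, A], d_list,
                            mu=np.zeros(n), Sigma_inv=0.1 * np.eye(n))
print(np.round(f_hat, 2))  # pairwise approximation of the ramp,
                           # shrunk slightly toward the prior mean
```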


Solver for the optimization problem


The solver of the optimization problem

In this section, we introduce different approaches to solving the optimization problem:

1. Brute-force search (global extremum)

2. Gradient descent search (usually a local extremum)

3. Genetic algorithm (global extremum)

4. Simulated annealing algorithm (global extremum)


Gradient descent algorithm (1)


Gradient descent algorithm (2)


Simulation: SR by gradient descent algorithm

Use six low-resolution frames (a)–(f) to reconstruct the high-resolution frame (g)


Simulation: SR by gradient descent algorithm


The problem of the gradient descent algorithm

The gradient descent algorithm may be trapped in a local extremum instead of reaching the global extremum.


Genetic algorithm (GA)

The GA includes the following steps:
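The slide's step diagram is not preserved in the transcript. As a hedged stand-in, here is a minimal sketch of the classic GA loop (selection, crossover, mutation) applied to minimizing an energy E(f); the population size, rates, and real-valued encoding are my assumptions, not the slide's.

```python
import random

# Hedged sketch of a classic genetic algorithm minimizing an energy E(f).
def genetic_minimize(E, n_vars, pop_size=40, generations=200,
                     crossover_rate=0.9, mutation_rate=0.05):
    pop = [[random.uniform(-1, 1) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=E)                    # selection: keep the fittest half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate and n_vars > 1:
                cut = random.randrange(1, n_vars)   # one-point crossover
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [x + random.gauss(0, 0.1)       # mutation
                     if random.random() < mutation_rate else x
                     for x in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=E)

# Toy usage: minimize a simple quadratic energy.
best = genetic_minimize(lambda f: sum(x * x for x in f), n_vars=3)
print([round(x, 2) for x in best])  # near [0, 0, 0]
```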


Simulated annealing (SA)

The SA includes the following steps:
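This slide's step diagram is likewise missing from the transcript. Here is a hedged sketch of the classic simulated annealing loop with Metropolis acceptance; the geometric cooling schedule and Gaussian proposals are assumed choices.

```python
import math
import random

# Hedged sketch of simulated annealing minimizing an energy E(f).
def anneal(E, f0, T0=1.0, cooling=0.999, iters=20000):
    f, e = list(f0), E(f0)
    T = T0
    for _ in range(iters):
        g = f[:]
        i = random.randrange(len(g))
        g[i] += random.gauss(0, 0.5)      # propose a local perturbation
        e_new = E(g)
        # Metropolis rule: always accept downhill moves; accept uphill moves
        # with probability exp(-(E_new - E)/T) to escape local extrema.
        if e_new < e or random.random() < math.exp(-(e_new - e) / T):
            f, e = g, e_new
        T *= cooling                      # gradually lower the temperature
    return f

best = anneal(lambda f: sum(x * x for x in f), [3.0, -2.0, 1.0])
print([round(x, 2) for x in best])  # near [0, 0, 0]
```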
