deconv comparisons
Posted on 06-Apr-2018
-
8/2/2019 Deconv Comparisons
1/24
Image deconvolution, denoising
and compression
T.E. Gureyev and Ya.I.Nesterets
15.11.2002
-
CONVOLUTION
D(x,y) = (I*P)(x,y) + N(x,y),

where D(x,y) is the experimental data (registered image),
I(x,y) is the unknown "ideal" image (to be found),
P(x,y) is the (usually known) convolution kernel,
* denotes 2D convolution, i.e.

(I*P)(x,y) = ∫∫ I(x',y') P(x−x', y−y') dx'dy',

and N(x,y) is the noise in the experimental data.
In real-life imaging, P(x,y) can be the point-spread function
of an imaging system; the noise N(x,y) may not be additive.
Poisson noise: D(x,y) = (σ_av²/I_av) Poisson{I(x,y) I_av / σ_av²}
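The forward model above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the 64×64 test image, the Gaussian PSF width and the count level are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate D = (I*P)(x,y) + noise with FFT-based circular convolution.
n = 64
I = np.zeros((n, n))
I[24:40, 24:40] = 1.0                       # "ideal" image: a bright square

y, x = np.mgrid[:n, :n]
P = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 3.0 ** 2))
P /= P.sum()                                # PSF normalized to unit volume
P = np.fft.ifftshift(P)                     # put the kernel origin at [0, 0]

blurred = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(P)))   # I * P
counts = 1000.0                             # mean counts per unit signal
D = rng.poisson(np.maximum(blurred, 0.0) * counts) / counts        # Poisson noise
```

Because the PSF is normalized, `blurred.sum()` equals `I.sum()` up to rounding; the Poisson step then makes the noise signal-dependent rather than additive, exactly as the slide notes.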
-
EXAMPLE OF CONVOLUTION
[Figure: ideal image * PSF = blurred image, with 3% and with 10% Poisson noise]
-
DECONVOLUTION PROBLEM
D(x,y) = (I*P)(x,y) + N(x,y)    (*)
Deconvolution problem: given D, P and N, find I
(i.e. compensate for the noise and the PSF of the imaging system).
Blind deconvolution: P is also unknown.
Equation (*) is mathematically ill-posed, i.e. its solution may
not exist, may not be unique, and may be unstable with respect
to small perturbations of the "input" data D, P and N. This is easy
to see in the Fourier representation of eq.(*):

D̂(ξ,η) = Î(ξ,η) P̂(ξ,η) + N̂(ξ,η)    (#)

1) Non-existence: D̂(ξ,η) ≠ 0, P̂(ξ,η) = N̂(ξ,η) = 0 ⇒ Î(ξ,η) = ?
2) Non-uniqueness: P̂(ξ,η) = D̂(ξ,η) = N̂(ξ,η) = 0 ⇒ any Î(ξ,η)
3) Instability: P̂(ξ,η) = ε ⇒ Î(ξ,η) = [D̂(ξ,η) − N̂(ξ,η)]/ε
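A one-frequency numeric sketch of the instability (3). Every number below is invented, but it shows how a tiny misestimate of the noise at a frequency where P̂ is nearly zero is amplified enormously:

```python
# At one frequency (xi, eta) where the transfer function is nearly zero,
# the inverse formula I_hat = (D_hat - N_hat)/P_hat amplifies any error
# in the noise estimate by 1/P_hat. All values here are invented.
P_hat = 1e-6                    # near-zero value of the transfer function
I_hat_true = 1.0                # true image spectrum at this frequency
N_hat = 0.05                    # true noise spectrum value
D_hat = I_hat_true * P_hat + N_hat

N_hat_est = 0.049               # noise misestimated by only 0.001
I_hat_rec = (D_hat - N_hat_est) / P_hat
error = I_hat_rec - I_hat_true  # the 0.001 misestimate becomes ~1000
```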
-
A SOLUTION OF THE DECONVOLUTION PROBLEM
D̂(ξ,η) = Î(ξ,η) P̂(ξ,η) + N̂(ξ,η)    (#)
Î(ξ,η) = [D̂(ξ,η) − N̂(ξ,η)] / P̂(ξ,η)    (!)
We assume that P̂(ξ,η) ≠ 0 (otherwise there is a genuine loss
of information and the problem cannot be solved). Then
eq.(!) provides a nice solution, at least in the noise-free case
(as in reality the noise cannot be subtracted exactly).
[Figure: convolution and deconvolution example images]
-
NON-LOCALITY OF (DE)CONVOLUTION
(I*P)(x,y) = ∫∫ I(x',y') P(x−x', y−y') dx'dy'
The value of the convolution I*P at a point (x,y) depends on all
those values I(x',y') within the vicinity of (x,y) where
P(x−x', y−y') ≠ 0. The same is true for deconvolution.
[Figure: convolution with a single-pixel-wide mask at the edges, and its deconvolution (the error is due to the non-locality and the 1-pixel-wide mask)]
-
EFFECT OF NOISE
[Figure: deconvolution of data with 3% noise, without and with regularization]
In the presence of noise, the ill-posedness of deconvolution
leads to artefacts in deconvolved images:
The problem can be alleviated with the help of regularization:
without regularization:  Î(ξ,η) = D̂(ξ,η) / P̂(ξ,η)
with regularization:     Î(ξ,η) = D̂(ξ,η) P̂*(ξ,η) / [|P̂(ξ,η)|² + α]    (!!)
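Formula (!!) translates directly into NumPy. This is a generic sketch of regularized Fourier (Wiener-type) deconvolution, not the authors' implementation; the function name and the default alpha are mine:

```python
import numpy as np

def wiener_deconvolve(D, P, alpha=1e-3):
    """Regularized Fourier deconvolution: I = F^-1[ D^ P^* / (|P^|^2 + alpha) ].

    D and P must have the same shape; P must be stored with its origin
    at pixel [0, 0] (e.g. via np.fft.ifftshift of a centered kernel).
    """
    D_hat = np.fft.fft2(D)
    P_hat = np.fft.fft2(P)
    I_hat = D_hat * np.conj(P_hat) / (np.abs(P_hat) ** 2 + alpha)
    return np.real(np.fft.ifft2(I_hat))
```

With alpha = 0 this reduces to the unregularized division D̂/P̂; increasing alpha trades resolution for noise suppression, which is exactly the tradeoff shown in the 3% and 10% noise examples.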
-
EFFECT OF NOISE. II
[Figure: deconvolution of data with 10% noise, without and with regularization]
In the presence of stronger noise, regularization may not be
able to deliver satisfactory results, as the loss of high frequency
information becomes very significant. Pre-filtering (denoising)
before deconvolution can potentially be of much assistance.
-
DECONVOLUTION METHODS
Two broad categories
(1)Direct methods
Directly solve the inverse problem (deconvolution).
Advantages: often linear, deterministic, non-iterative and fast.
Disadvantages: sensitivity to (amplification of) noise, difficulty
in incorporating available a priori information.
Examples: Fourier (Wiener) deconvolution, algebraic inversion.
(2)Indirect methods
Perform (parametric) modelling, solve the forward problem
(convolution) and minimize a cost function.
Disadvantages: often non-linear, probabilistic, iterative and slow.
Advantages: better treatment of noise, easy incorporation of
available a priori information.
Examples: least-squares fit, Maximum Entropy, Richardson-Lucy, Pixon
-
ITERATIVE WIENER DECONVOLUTION
The method proposed by A.W. Stevenson
Builds the deconvolution iteratively as

Î_n(ξ,η) = Ŵ(ξ,η) D̂(ξ,η) Σ_{k=0}^{n−1} [1 − Ŵ(ξ,η) P̂(ξ,η)]^k

Requires 2 FFTs of the input image at each iteration
Can use a large (and/or variable) regularization parameter
[Figure: convolution with 10% noise]
-
Richardson-Lucy (RL) algorithm
If the PSF is shift-invariant, then the RL iterative algorithm is written as
I^(i+1) = I^(i) · Corr(D / (I^(i) * P), P)
where the correlation of two matrices is Corr(g,h)_{n,m} = Σ_{i,j} g_{n+i,m+j} h_{i,j}
Usually the initial guess is uniform, or simply I^(0) = D
Advantages:
1. Easy to implement (no non-linear optimisation is needed)
2. Implicit object-size constraint (I_n^(0) = 0 ⇒ I_n^(i) = 0 ∀i)
and positivity constraint (I_n^(0) > 0 ⇒ I_n^(i) > 0 ∀i)
Disadvantages:
1. Slow convergence in the absence of noise and instability in the
presence of noise
2. Produces edge artifacts and spurious "sources"
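The RL iteration above can be implemented in a few lines. This is a minimal FFT-based sketch with circular boundaries; the flat initial guess and the eps guard are my choices:

```python
import numpy as np

def conv2_fft(a, b):
    """Circular 2D convolution via FFT (b stored with origin at [0, 0])."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2_fft(g, h):
    """Circular cross-correlation: Corr(g,h)[n,m] = sum_ij g[n+i,m+j] h[i,j]."""
    return np.real(np.fft.ifft2(np.fft.fft2(g) * np.conj(np.fft.fft2(h))))

def richardson_lucy(D, P, n_iter=20, eps=1e-12):
    """RL iteration: I_{i+1} = I_i * Corr(D / (I_i conv P), P)."""
    I = np.full_like(D, D.mean())              # flat (uniform) initial guess
    for _ in range(n_iter):
        I = I * corr2_fft(D / np.maximum(conv2_fft(I, P), eps), P)
    return I
```

The update is multiplicative, which is what enforces the implicit positivity and object-size constraints listed above.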
-
[Figure (no noise): original image, data, and RL deconvolutions after 50 and 1000 iterations]
-
[Figure (3% noise): original image, data, and RL deconvolutions after 6 and 20 iterations]
-
Bayesian methods
Joint probability of two events: p(A,B) = p(A) p(B|A)
p(D,I,M) = p(D|I,M) p(I,M) = p(D|I,M) p(I|M) p(M)
         = p(I|D,M) p(D,M) = p(I|D,M) p(D|M) p(M)
where I is the image, M the model and D the data.
(1)  p(I|D,M) = p(D|I,M) p(I|M) / p(D|M)
(2)  p(I,M|D) = p(D|I,M) p(I|M) p(M) / p(D)
p(I|D,M) or p(I,M|D): inference
p(I,M), p(I|M) and p(M): priors
p(D|I,M): likelihood function
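A toy numeric check of Bayes' rule (1) with two candidate images under a fixed model M; all the probabilities below are invented for the demo:

```python
import numpy as np

# Two candidate images I0, I1 under a fixed model M.
prior = np.array([0.5, 0.5])                # p(I|M)
likelihood = np.array([0.9, 0.1])           # p(D|I,M)
evidence = (likelihood * prior).sum()       # p(D|M) = sum_I p(D|I,M) p(I|M)
posterior = likelihood * prior / evidence   # p(I|D,M), eq. (1)
```

Dividing by the evidence p(D|M) is what makes the posterior a proper probability distribution over the candidate images.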
-
Goodness-of-fit (GOF) and Maximum Likelihood (ML) methods
Assumes the image prior p(I|M) = const and results in maximization
with respect to I of the likelihood function p(D|I,M).
In the case of Gaussian noise (e.g. instrumentation noise)
p(D|I,M) = Z⁻¹ exp(−χ²/2) (standard chi-square distribution),
where χ² = Σ_k ((I*P)_k − D_k)² / σ_k²,  Z = Π_k (2π σ_k²)^(1/2)
In the case of Poisson noise (count-statistics noise)
p(D|I,M) = Π_k (I*P)_k^(D_k) exp(−(I*P)_k) / D_k!
! Without regularization this approach typically produces images
with spurious features resulting from over-fitting of the noisy
data
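In practice one maximizes the logarithm of the Poisson likelihood above. A sketch (the function name is mine; `model` stands for the predicted blur (I*P)):

```python
import math
import numpy as np

def poisson_loglike(D, model, eps=1e-12):
    """log p(D|I,M) = sum_k [ D_k log m_k - m_k - log(D_k!) ], m = I*P."""
    m = np.maximum(model, eps)                      # guard against log(0)
    log_fact = np.vectorize(math.lgamma)(D + 1.0)   # log(D_k!) via lgamma
    return float(np.sum(D * np.log(m) - m - log_fact))
```

For a fixed D the maximum over the model is at m = D, i.e. exact reproduction of the noisy data, which is why the slide warns that the unregularized approach over-fits.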
-
[Figure: original image; data with no noise and with 3% noise; GOF deconvolutions]
-
Maximum Entropy (ME) methods
(S.F. Gull and G.J. Daniell; J. Skilling and R.K. Bryan)
The ME principle states that, a priori, the most likely image is the one
which is completely flat.
Image prior: p(I|M) = exp(αS),
where S = −Σ_i p_i log₂ p_i is the image entropy, p_i = I_i / Σ_i I_i
GOF term (usually): p(D|I,M) = Z⁻¹ exp(−χ²/2)
The posterior: p(I|D,M) ∝ exp(−L + αS),  L = χ²/2
+ tends to suppress spurious sources in the data
− can cause over-flattening of the image
! the relative weight α of the GOF and entropy terms is crucial
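The entropy prior and the resulting objective are straightforward to write down. A sketch (the names are mine; `P_conv` stands for any callable applying the blur):

```python
import numpy as np

def entropy(I):
    """Image entropy S = -sum_i p_i log2 p_i with p_i = I_i / sum(I)."""
    p = I / I.sum()
    p = p[p > 0]                         # convention: 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

def me_objective(I, D, P_conv, sigma, alpha):
    """ME objective -L + alpha*S with L = chi^2/2 (to be maximized over I)."""
    chi2 = np.sum((P_conv(I) - D) ** 2 / sigma ** 2)
    return -0.5 * chi2 + alpha * entropy(I)
```

A flat image maximizes S alone, so alpha directly controls the over-flattening noted above.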
-
[Figure: original image; data (3% noise); ME deconvolutions
p(I|D,M) ∝ exp(−L + αS) for α = 2, 5, 10]
-
Pixon method
(R.C.Puetter and R.K.Pina, http://www.pixon.com/)
The image prior is p(I|M) = p({N_i} | n, N) = N! / (n^N Π_i N_i!),
where N_i is the number of units of signal (e.g. counts) in the i-th
element of the image, n is the total number of elements, and
N = Σ_i N_i is the total signal in the image.
The image prior can be maximized by
1. decreasing the total number of cells, n, and
2. making the {N_i} as large as possible.
I(x) = (I_pseudo ⊗ K)(x) = ∫ dy K((x − y)/δ(x)) I_pseudo(y),
where K is the pixon shape function normalized to unit volume
and I_pseudo is the pseudo-image.
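A 1-D sketch of the pseudo-image smoothing above; the Gaussian shape for K and the per-pixel widths delta are my invented choices, not the Pixon library's code:

```python
import numpy as np

def pixon_smooth(I_pseudo, delta):
    """I(x) = sum_y K((x - y)/delta[x]) I_pseudo(y), K a normalized Gaussian.

    1-D sketch of the pixon convolution with a spatially varying width;
    K is renormalized at each x so that it keeps unit volume.
    """
    n = len(I_pseudo)
    y = np.arange(n)
    I = np.zeros(n)
    for xpos in range(n):
        K = np.exp(-0.5 * ((xpos - y) / delta[xpos]) ** 2)
        K /= K.sum()                    # unit volume at every x
        I[xpos] = (K * I_pseudo).sum()
    return I
```

A large delta in smooth regions enlarges the effective cells and so raises the prior p({N_i} | n, N), which is the mechanism the slide describes.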
-
[Figure: original image; data with no noise and with 3% noise; Pixon deconvolutions]
-
[Figure (no noise): original, RL, GOF, Pixon, Wiener and IWiener deconvolutions]
-
[Figure (3% noise): original, ME, RL, Pixon, IWiener and Wiener deconvolutions]
-
DOES A METHOD EXIST CAPABLE OF BETTER
DECONVOLUTION IN THE PRESENCE OF NOISE?
1) Test images can be found on "kayak" in
"common/DemoImages/DeBlurSamples" directory
2) Some deconvolution routines have been implemented
on-line and can be used with uploaded images. These
routines can be found at "www.soft4science.org" in the
"Projects → On-line interactive services → Deblurring on-line" area.