
Appendix: Solutions to Selected Exercises

Exercise 2.1. According to Theorems 2.3 and 2.4,

$$\frac{f(t+h)X(t+h) - f(t)X(t)}{h} = \left[\frac{f(t+h) - f(t)}{h}\right]X(t+h) + f(t)\left[\frac{X(t+h) - X(t)}{h}\right] \to f'(t)X(t) + f(t)X'(t)\,.$$
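As a numerical illustration of this product rule, the following sketch checks that the difference quotient of f(t)X(t) approaches f'(t)X(t) + f(t)X'(t) in mean square as h decreases; the process X(t) = A sin t + Bt with independent standard normal A and B, and f(t) = t², are assumed examples, not taken from the exercise.

```python
import numpy as np

# Minimal sketch (assumed example, not from the exercise): X(t) = A*sin(t) + B*t
# with independent A, B ~ N(0, 1), so X is m.s. differentiable, and f(t) = t**2.
rng = np.random.default_rng(0)
n = 100_000
A, B = rng.standard_normal(n), rng.standard_normal(n)

X = lambda t: A * np.sin(t) + B * t
Xp = lambda t: A * np.cos(t) + B          # sample (and m.s.) derivative of X
f = lambda t: t ** 2
fp = lambda t: 2.0 * t

t = 1.0
for h in (1e-1, 1e-2, 1e-3):
    quotient = (f(t + h) * X(t + h) - f(t) * X(t)) / h
    limit = fp(t) * X(t) + f(t) * Xp(t)
    mse = np.mean((quotient - limit) ** 2)   # Monte Carlo estimate of the m.s. error
    print(f"h = {h:g}:  mean-square error of the difference quotient ~ {mse:.2e}")
```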

Exercise 2.2. a) X and E {X(s)X(t)} are presented graphically below.

Fig. A.1. X and E{X(s)X(t)} (the graphs of the sample functions of X on [0, 1] and of E{X(s)X(t)} on [0, 1] x [0, 1]; graphical detail not recoverable from the scan)

b) It follows directly from (a).

c) If h = k,

$$\frac{\Delta\Delta\, E\{X(0)X(0)\}}{h\,h} = \frac{E\{X(h) - X(0)\}^2}{h^2} = \frac{E X^2(h)}{h^2} = \frac{1}{h^2}\,,$$

which does not converge as $h \to 0$.
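Numerically the divergence looks as follows; the sketch assumes, reading Fig. A.1, that X(0) = 0 and that X(h) equals a fixed zero-mean, unit-variance random variable Z₁ for all small h > 0, so that E{X(h) - X(0)}² stays equal to 1 while h² shrinks.

```python
import numpy as np

# Sketch under an assumption read off Fig. A.1: X(0) = 0 and, for all small h > 0,
# X(h) equals one fixed random variable Z1 with E Z1 = 0 and E Z1^2 = 1.  Then
# E{X(h) - X(0)}^2 stays equal to 1 while the denominator h^2 goes to zero.
rng = np.random.default_rng(1)
Z1 = rng.standard_normal(1_000_000)

for h in (1e-1, 1e-2, 1e-3):
    est = np.mean(Z1 ** 2) / h ** 2          # Monte Carlo estimate of E{X(h)-X(0)}^2 / h^2
    print(f"h = {h:g}:  estimate ~ {est:.1f}  (behaves like 1/h^2)")
```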


Exercise 2.3. Let us show only that

$$\int_0^1 f(t)\,dX(t)$$

does not exist. Consider a convergent sequence $\{p_n\}_{n\in\mathbb{N}}$ of partitions of [0, 1] such that in each $p_n$ the intermediate points do not coincide with the subdivision points and such that the point 1/2 is a subdivision point of $p_n$ if n is odd, but not if n is even. We can show that, in this case,

$$S_{f,X}(p_n) = \begin{cases} (1)(0 - Z) = -Z & \text{if } n \text{ is odd} \\ 0 & \text{if } n \text{ is even.} \end{cases}$$

It follows that

$$\{S_{f,X}(p_n)\}_{n\in\mathbb{N}} = -Z,\ 0,\ -Z,\ 0,\ -Z,\ 0,\ \ldots$$

Hence, it is not a Cauchy sequence in $L_2(\Omega)$.
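For a concrete picture, the sketch below uses assumed stand-ins for f and X (the exercise's own f and X are not recoverable from the scan): f(t) = 1 for t ≤ 1/2 and 0 otherwise, and X(t) = Z for t < 1/2 and 0 for t ≥ 1/2. With these choices the two partition types reproduce the alternating values -Z and 0.

```python
# Sketch with assumed stand-ins for f and X (not recoverable from the scan):
# f(t) = 1 for t <= 1/2, 0 otherwise; X(t) = Z for t < 1/2, 0 for t >= 1/2,
# for one realization Z = 1.7 of the random variable Z.
def f(t):
    return 1.0 if t <= 0.5 else 0.0

def make_X(Z):
    return lambda t: Z if t < 0.5 else 0.0

def rs_sum(f, X, points, taus):
    """Riemann-Stieltjes sum: sum_i f(tau_i) * [X(t_i) - X(t_{i-1})]."""
    return sum(f(tau) * (X(b) - X(a))
               for a, b, tau in zip(points[:-1], points[1:], taus))

Z = 1.7
X = make_X(Z)

# n odd: 1/2 is a subdivision point; intermediate points avoid the subdivision points.
print(rs_sum(f, X, [0.0, 0.25, 0.5, 0.75, 1.0], [0.1, 0.4, 0.6, 0.9]))   # -> -1.7 = -Z
# n even: 1/2 lies inside (0.4, 0.6); the intermediate point there is taken right of 1/2.
print(rs_sum(f, X, [0.0, 0.4, 0.6, 1.0], [0.2, 0.55, 0.8]))              # ->  0.0
```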

Exercise 2.4. It is seen that for each partition $p_n$ of [0, 1],

$$S_{f,X}(p_n) = \sum_i c\,\{X(t_i) - X(t_{i-1})\} = c\,\{X(1) - X(0)\}\,.$$

Hence, the integral

$$\int_0^1 f(t)\,dX(t)$$

exists and

$$\int_0^1 f(t)\,dX(t) = c\,\{X(1) - X(0)\}\,.$$

Regarding

$$\int_0^1 X(t)\,df(t)\,,$$

all Riemann-Stieltjes sums equal zero. Hence, the integral exists and

$$\int_0^1 X(t)\,df(t) = 0\,.$$
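The telescoping is easy to verify numerically; in the sketch the sample path of X and the partition are arbitrary assumed examples, and the integrand is the constant c.

```python
import numpy as np

# Sketch: with a constant integrand f(t) = c, every Riemann-Stieltjes sum
# telescopes to c * (X(1) - X(0)), whatever the partition and intermediate points.
# The sample path of X and the partition below are arbitrary assumed examples.
rng = np.random.default_rng(2)
c = 3.0
grid = np.linspace(0.0, 1.0, 11)
path = rng.standard_normal(11).cumsum()
X = lambda t: np.interp(t, grid, path)       # one (piecewise linear) sample path

pts = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0.0, 1.0, 7))))   # random partition
s = sum(c * (X(b) - X(a)) for a, b in zip(pts[:-1], pts[1:]))
print(s, c * (X(1.0) - X(0.0)))              # the two values coincide
```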

Exercise 2.6. All Riemann-Stieltjes sums corresponding to

$$\int_0^1 f(t)\,dX(t)$$

are zero and those corresponding to

$$\int_1^2 f(t)\,dX(t)$$


equal C. Hence, these integrals exist (and are equal to zero and C, respectively).

For

$$\int_0^2 f(t)\,dX(t)\,,$$

let $\{p_n\}_{n\in\mathbb{N}}$ be a convergent sequence of partitions of [0, 2] such that none of the $p_n$ contains 1 as a subdivision point and such that the intermediate point in the partition interval containing 1 is chosen to the left of 1 if n is odd, and to the right of 1 if n is even. Then,

$$S_{f,X}(p_n) = \begin{cases} 0 & \text{if } n \text{ is odd} \\ C & \text{if } n \text{ is even.} \end{cases}$$

Hence, $\{S_{f,X}(p_n)\}_{n\in\mathbb{N}}$ does not converge and the integral

$$\int_0^2 f(t)\,dX(t)$$

does not exist.

Exercise 2.8. Let p be a partition of I with subdivision points $t_0, t_1, \ldots, t_n$ such that

$$a = t_0 < t_1 < \cdots < t_n = b\,.$$

Set

$$\Delta_i X = X(t_i) - X(t_{i-1})\,, \quad i = 1, \ldots, n\,.$$

Then

$$\Delta_i EX = E\{\Delta_i X\}$$

and, using the second inequality of (1.67),

$$V_{EX}(p) = \sum_{i=1}^n |\Delta_i EX| \le \sum_{i=1}^n \|\Delta_i X\| = V_X(p)\,,$$

and we have the assertion.
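A Monte Carlo sanity check of the inequality $V_{EX}(p) \le V_X(p)$ for one assumed example, X(t) = m(t) + W(t) with m(t) = sin 2πt and W a standard Wiener-Levy process, on a fixed partition:

```python
import numpy as np

# Sketch (assumed example): X(t) = m(t) + W(t) with m(t) = sin(2*pi*t) and W a
# standard Wiener-Levy process.  Compare V_EX(p) = sum_i |Delta_i m| with
# V_X(p) = sum_i ||Delta_i X||, the L2(Omega)-norms estimated by Monte Carlo.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 21)                      # subdivision points of p
dm = np.diff(np.sin(2.0 * np.pi * t))

n_paths = 200_000
dW = rng.standard_normal((n_paths, len(t) - 1)) * np.sqrt(np.diff(t))
dX = dm + dW                                       # increments Delta_i X, one row per path

V_EX = np.sum(np.abs(dm))
V_X = np.sum(np.sqrt(np.mean(dX ** 2, axis=0)))
print(V_EX, V_X, V_EX <= V_X)                      # the inequality of the exercise
```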

Exercise 2.11. Let $t_0, t_1, \ldots, t_n$ be the subdivision points of a partition p of I such that

$$a = t_0 < t_1 < \cdots < t_n = b\,.$$

Set

$$C_X(s, t) = E\{X(s)X(t)\}\,, \quad (s, t) \in I^2\,,$$

$$m(t) = EX(t)\,,$$

$$X(t) = m(t) + Y(t)\,, \text{ entailing } EY(t) = 0\,, \quad t \in I\,,$$


$$\Delta_i X = X(t_i) - X(t_{i-1})\,,$$

$$\Delta_i m = m(t_i) - m(t_{i-1})\,,$$

$$\Delta_i Y = Y(t_i) - Y(t_{i-1})\,, \quad i = 1, \ldots, n\,.$$

Then

$$\Delta_i X = \Delta_i m + \Delta_i Y\,,$$

and if

$$r_{ij} = [t_{i-1}, t_i] \times [t_{j-1}, t_j]\,,$$

then

$$\Delta\Delta_{r_{ij}} C_X = E\{\Delta_i X\,\Delta_j X\} = E\{(\Delta_i m + \Delta_i Y)(\Delta_j m + \Delta_j Y)\} = \Delta_i m\,\Delta_j m + E\{\Delta_i Y\,\Delta_j Y\} \quad (\text{since } EY(t) = 0)\,.$$

Now we define the set of numbers $e_1, \ldots, e_n$ as follows:

$$e_i = \begin{cases} 1 & \text{if } \Delta_i m \ge 0 \\ -1 & \text{if } \Delta_i m < 0\,. \end{cases}$$

Then,

$$V_{C_X}(p \times p) = \sum_{i=1}^n \sum_{j=1}^n |E\{\Delta_i X\,\Delta_j X\}| \ge \sum_{i=1}^n \sum_{j=1}^n \big(E\{\Delta_i X\,\Delta_j X\}\big)\, e_i e_j$$

$$= \sum_{i=1}^n \sum_{j=1}^n \Delta_i m\,\Delta_j m\, e_i e_j + \sum_{i=1}^n \sum_{j=1}^n \big(E\{\Delta_i Y\,\Delta_j Y\}\big)\, e_i e_j$$

$$= \left(\sum_{i=1}^n |\Delta_i m|\right)^2 + E\left\{\sum_{i=1}^n (\Delta_i Y)\, e_i\right\}^2 \ge \left(\sum_{i=1}^n |\Delta_i m|\right)^2 = \{V_m(p)\}^2\,.$$

The assertion follows by taking limits if p runs through a convergent sequence of partitions of I.
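The inequality $\{V_m(p)\}^2 \le V_{C_X}(p \times p)$ can be checked directly for an assumed example in which the increment covariances are known in closed form, say X(t) = m(t) + W(t) with m(t) = sin 2πt and W a standard Wiener-Levy process, so that $E\{\Delta_i X\,\Delta_j X\} = \Delta_i m\,\Delta_j m + \delta_{ij}\,\Delta_i t$:

```python
import numpy as np

# Sketch (assumed example): X(t) = m(t) + W(t), m(t) = sin(2*pi*t), W a standard
# Wiener-Levy process, so E{Delta_i X Delta_j X} = Delta_i m Delta_j m + delta_ij Delta_i t.
t = np.linspace(0.0, 1.0, 21)                      # subdivision points of p
dm = np.diff(np.sin(2.0 * np.pi * t))
dt = np.diff(t)

cov = np.outer(dm, dm) + np.diag(dt)               # E{Delta_i X Delta_j X}
V_Cx = np.sum(np.abs(cov))                         # V_{C_X}(p x p)
V_m_sq = np.sum(np.abs(dm)) ** 2                   # {V_m(p)}^2
print(V_Cx, V_m_sq, V_m_sq <= V_Cx)                # the inequality of the exercise
```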

Exercise 2.12. We have, formally,

$$E\left\{\int_0^s F(u)\,dW(u)\left[\int_0^t G(v)\,dW(v)\right]^{\mathrm T}\right\} = E\left\{\int_0^s F(u)\,W(u)\,du \int_0^t W^{\mathrm T}(v)\,G^{\mathrm T}(v)\,dv\right\}$$

$$= \int_0^s du \int_0^t dv\; F(u)\,E\{W(u)\,W^{\mathrm T}(v)\}\,G^{\mathrm T}(v) = \int_0^t \left[\int_0^s F(u)\,B(u)\,\delta(u - v)\,du\right] G^{\mathrm T}(v)\,dv$$

$$= \int_0^s F(v)\,B(v)\,G^{\mathrm T}(v)\,dv$$

if $0 \le s \le t \le T$.
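A Monte Carlo check of this formula in the scalar case; B(u) ≡ 1, F(u) = u, G(v) = e^{-v}, s = 0.5, t = 1 and the discretization are all assumptions of the sketch. The simulated left-hand side should approach $\int_0^s F(v)G(v)\,dv$.

```python
import numpy as np

# Sketch of the scalar case: B(u) = 1, F(u) = u, G(v) = exp(-v), s = 0.5, t = 1
# (all assumed example choices).  The Monte Carlo estimate of
# E{ int_0^s F dW * int_0^t G dW } should approach int_0^s F(v) G(v) dv.
rng = np.random.default_rng(4)
F = lambda u: u
G = lambda v: np.exp(-v)
s, t, n_steps, n_paths = 0.5, 1.0, 500, 20_000

tau = np.linspace(0.0, t, n_steps + 1)
mid = 0.5 * (tau[:-1] + tau[1:])
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(np.diff(tau))

I_F = np.sum(np.where(mid <= s, F(mid), 0.0) * dW, axis=1)   # int_0^s F dW
I_G = np.sum(G(mid) * dW, axis=1)                            # int_0^t G dW
lhs = np.mean(I_F * I_G)

dv = s / 100_000
v = (np.arange(100_000) + 0.5) * dv
rhs = np.sum(F(v) * G(v)) * dv                               # int_0^s F(v) G(v) dv
print(lhs, rhs)                                              # agree up to Monte Carlo error
```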


Exercise 2.14. Define

$$X(\omega, t) = \sqrt{\omega + t}\,, \quad (\omega, t) \in [0, 1] \times [0, T]\,.$$

Then the sample derivative is

$$X'(\omega, t) = \frac{1}{2\sqrt{\omega + t}}$$

in the given domain, outside the point (0, 0). Hence, the samples are differentiable on [0, T] with probability one.

Now, X is of second order since

$$EX^2 = \int_0^1 (\omega + t)\,d\omega < \infty\,,$$

and it is m.s. differentiable if t > 0 with derivative $1/(2\sqrt{\omega + t})$. However, $X'(\omega, t)$ is not of second order at t = 0 since

$$E\{X'(\omega, 0)\}^2 = \int_0^1 \frac{1}{4\omega}\,d\omega$$

is not finite. It thus follows from Theorem 2.36 that X is not m.s. differentiable at t = 0.
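The breakdown at t = 0 can also be seen from the difference quotient itself: $E\{(X(h) - X(0))/h\}^2 = \int_0^1 d\omega/(\sqrt{\omega + h} + \sqrt{\omega})^2$, which grows roughly like $(1/4)\ln(1/h)$ as $h \to 0$. A small quadrature sketch (grid sizes are arbitrary choices):

```python
import numpy as np

# Sketch: the second moment of the difference quotient at t = 0,
#   E{(X(h) - X(0)) / h}^2 = int_0^1 dw / (sqrt(w + h) + sqrt(w))^2,
# grows (roughly like (1/4) * log(1/h)) as h -> 0, so X has no m.s. derivative there.
# The quadrature grid and the step sizes below are arbitrary choices.
def second_moment(h, n=2_000_000):
    dw = 1.0 / n
    w = (np.arange(n) + 0.5) * dw                  # midpoint rule on [0, 1]
    return np.sum(dw / (np.sqrt(w + h) + np.sqrt(w)) ** 2)

for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"h = {h:g}:  E{{(X(h) - X(0))/h}}^2 ~ {second_moment(h):.3f}")
```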

Exercise 3.1. Let $X_i \in L_2(\Omega)$ be the components of X and $m_{ij} \in \mathbb{R}$ the elements of M. $(X, Y)_N$ is an inner product in $L_2^N(\Omega)$ because the following conditions hold for all X, Y, Z in $L_2^N(\Omega)$ and for all $a \in \mathbb{R}$:

a) since M is symmetric,

$$(X, Y)_N = E\{X^{\mathrm T} M Y\} = E\{Y^{\mathrm T} M X\} = (Y, X)_N\,;$$

b) $(aX, Y)_N = a\,(X, Y)_N$;

c) $(X + Y, Z)_N = (X, Z)_N + (Y, Z)_N$;

d) the symmetry of M implies that there is an orthogonal matrix U such that

$$M = U^{\mathrm T}\,\mathrm{diag}(\lambda_1, \ldots, \lambda_N)\,U\,.$$

The eigenvalues $\lambda_i$ of M are real and positive since M is positive. Hence, with $u_{ij}$ denoting the elements of U,

$$(X, X)_N = E\{X^{\mathrm T} M X\} = E\{X^{\mathrm T} U^{\mathrm T}\,\mathrm{diag}(\lambda_1, \ldots, \lambda_N)\,U X\} = \sum_{i=1}^N \lambda_i\, E\left\{\sum_{j=1}^N u_{ij} X_j\right\}^2 > 0$$

unless

$$\sum_{j=1}^N u_{ij} X_j = 0$$


for all i, i.e., $UX = 0$ and hence, U being orthogonal and thus invertible, $X = 0$ a.s.
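Step (d) can be illustrated numerically; in the sketch below the positive-definite M and the distribution of X are assumed examples, and the quadratic form $E\{X^{\mathrm T} M X\}$ is compared with the eigenvalue-weighted sum of second moments of the rotated components.

```python
import numpy as np

# Sketch of step (d) with assumed data: a random symmetric positive-definite M and
# a correlated Gaussian X.  numpy's eigh gives M = U diag(lam) U^T (the exercise
# writes the same factorization with U and U^T interchanged).
rng = np.random.default_rng(5)
N = 4
A = rng.standard_normal((N, N))
M = A @ A.T + N * np.eye(N)                        # symmetric, positive definite
lam, U = np.linalg.eigh(M)

X = rng.standard_normal((100_000, N)) @ rng.standard_normal((N, N))   # samples of X
lhs = np.mean(np.einsum('ni,ij,nj->n', X, M, X))   # Monte Carlo E{X^T M X}
Y = X @ U                                          # rotated components (U^T x per sample)
rhs = np.sum(lam * np.mean(Y ** 2, axis=0))        # sum_i lambda_i E{(rotated X)_i^2}
print(lhs, rhs, lhs > 0)                           # equal, and strictly positive
```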

Exercise 3.3. According to (3.57), we have formally

$$X(t) = \Phi(t)\left\{C + \int \Phi^{-1}(s)\,W(s)\,ds\right\} = \Phi(t)\left\{C + \int \Phi^{-1}(s)\,\frac{dW(s)}{ds}\,ds\right\} = \Phi(t)\left\{C + \int \Phi^{-1}(s)\,dW(s)\right\}\,.$$

Exercise 4.1. If U, V and W are subspaces of $L_2(\Omega)$ such that $U \perp V$ and $W = U + V\ (= U \oplus V)$, and if $\mathcal{P}_U$, $\mathcal{P}_V$ and $\mathcal{P}_W$ are the orthogonal projection operators of $L_2(\Omega)$ onto U, V and W, respectively, then

$$\mathcal{P}_W = \mathcal{P}_U + \mathcal{P}_V\,,$$

i.e., for any $X \in L_2(\Omega)$, $\mathcal{P}_W X = \mathcal{P}_U X + \mathcal{P}_V X$. By definition, $X - \mathcal{P}_U X \in U^{\perp}$ and $\mathcal{P}_V X \in V \subset U^{\perp}$, and hence $X - \mathcal{P}_U X - \mathcal{P}_V X \in U^{\perp}$, or

$$X - (\mathcal{P}_U X + \mathcal{P}_V X) \perp U\,.$$

Analogously,

$$X - (\mathcal{P}_U X + \mathcal{P}_V X) \perp V$$

and so

$$X - (\mathcal{P}_U X + \mathcal{P}_V X) \perp U + V = W\,.$$

As also

$$\mathcal{P}_U X + \mathcal{P}_V X \in U + V = W\,,$$

we have

$$\mathcal{P}_W X = \mathcal{P}_U X + \mathcal{P}_V X\,.$$

Applying this result to $[Z] = [D] \oplus [Z_0]$ and with $X = EX + X_0$, we obtain

$$\mathcal{P}_Z X = \mathcal{P}_Z EX + \mathcal{P}_Z X_0 \qquad (\text{A.1})$$

where EX, looked upon as a degenerate random variable, is an element of [D]. Hence,

$$\mathcal{P}_D EX = EX\,.$$


Also, since $[D] \perp [Z_0]$,

$$\mathcal{P}_{Z_0} EX = 0\,.$$

As $X_0 \perp [D]$,

$$\mathcal{P}_D X_0 = 0\,,$$

and so finally (A.1) gives

$$\mathcal{P}_Z X = EX + \mathcal{P}_{Z_0} X_0\,.$$
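The identity $\mathcal{P}_W = \mathcal{P}_U + \mathcal{P}_V$ has an exact finite-dimensional analogue; the sketch below checks it in $\mathbb{R}^5$ for two randomly chosen mutually orthogonal subspaces (the dimensions and the random frame are assumptions of the example).

```python
import numpy as np

# Sketch: finite-dimensional analogue in R^5.  U and V are two mutually orthogonal
# subspaces taken from a random orthonormal frame (an assumed example), W = U + V.
def projector(basis):
    """Orthogonal projector onto the column span of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

rng = np.random.default_rng(6)
frame, _ = np.linalg.qr(rng.standard_normal((5, 5)))
U_basis, V_basis = frame[:, :2], frame[:, 2:4]
W_basis = np.hstack([U_basis, V_basis])

P_U, P_V, P_W = projector(U_basis), projector(V_basis), projector(W_basis)
x = rng.standard_normal(5)
print(np.allclose(P_W @ x, P_U @ x + P_V @ x))     # True: P_W = P_U + P_V when U and V are orthogonal
```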

Exercise 4.5. Let $V \subset L_2(\Omega)$ be a finite-dimensional subspace, let $\phi_1, \ldots, \phi_N$ be an orthonormal base of V and let

$$\{f_n\}_{n\in\mathbb{N}}$$

be a Cauchy sequence in V. For each $n \in \mathbb{N}$, there are real numbers $c_{n1}, \ldots, c_{nN}$ such that

$$f_n = c_{n1}\phi_1 + \cdots + c_{nN}\phi_N\,.$$

It is seen that

$$\|f_n - f_m\|^2 = \sum_{j=1}^N (c_{nj} - c_{mj})^2$$

and

$$\{c_{n1}\}_{n\in\mathbb{N}}, \ldots, \{c_{nN}\}_{n\in\mathbb{N}}$$

are Cauchy sequences in $\mathbb{R}$, hence convergent with limits, say $c_1, \ldots, c_N$, respectively. It follows that $f_n$ converges to $c_1\phi_1 + \cdots + c_N\phi_N \in V$ as $n \to \infty$. Hence V is closed.

Exercise 4.6. The mapping $\mathcal{G} : L_2([0, t]) \to H(W, t)$ is an isometry since, for all f and g in $L_2([0, t])$,

$$\left(\int_0^t f(u)\,dW(u),\ \int_0^t g(v)\,dW(v)\right)_\Omega = E\left\{\int_0^t f(u)\,dW(u) \int_0^t g(v)\,dW(v)\right\} = \int_0^t f(s)\,g(s)\,ds = (f, g)_{[0, t]}$$

due to Theorem 4.9, where $b_{ij} = 1$ since $W_i$ and $W_j$ are identical to the standard Wiener-Levy process.


Exercise 4.7. For all f in $L_2([0, t])$, see (4.53),

$$\mathcal{G}f = \int_0^t f(s)\,dZ(s) = \int_0^t f(s)\,R(s)\,ds + \int_0^t f(s)\,dW(s) \in H(Z, t)\,.$$

Clearly, $\mathcal{G} : L_2([0, t]) \to H(Z, t)$ is linear. It is shown in D of 4.5 that $\mathcal{G}$ is surjective and 1-1. Hence,

$$\mathcal{G}^{-1} : H(Z, t) \to L_2([0, t])$$

is defined (and linear since $\mathcal{G}$ is linear). We shall show that both $\mathcal{G}$ and $\mathcal{G}^{-1}$ are bounded. Set $X = \mathcal{G}f$, $f = \mathcal{G}^{-1}X$.

Because of the orthogonality of R(u) and W(v) we have

$$\|X\|_\Omega^2 = \|\mathcal{G}f\|_\Omega^2 = \left\|\int_0^t f(s)\,R(s)\,ds + \int_0^t f(s)\,dW(s)\right\|_\Omega^2 = \left\|\int_0^t f(s)\,R(s)\,ds\right\|_\Omega^2 + \left\|\int_0^t f(s)\,dW(s)\right\|_\Omega^2\,. \qquad (\text{A.2})$$

From (A.2) it follows that

$$\left\|\int_0^t f(s)\,dW(s)\right\|_\Omega^2 \le \|X\|_\Omega^2\,,$$

i.e.,

$$\|f\|_{[0, t]}^2 = \int_0^t f^2(s)\,ds \le \|X\|_\Omega^2\,,$$

$$\|\mathcal{G}^{-1}X\|_{[0, t]}^2 \le (1)\,\|X\|_\Omega^2\,, \qquad (\text{A.3})$$

and

$$\|\mathcal{G}f\|_\Omega^2 = \int_0^t \int_0^t f(u)\,f(v)\,E\{R(u)R(v)\}\,du\,dv + \int_0^t f^2(s)\,ds\,.$$

And so, if $M = \max E\{R(u)R(v)\}$ on $[0, t]^2$,

$$\|\mathcal{G}f\|_\Omega^2 \le M\left(\int_0^t f(s)\,ds\right)^2 + \int_0^t f^2(s)\,ds\,,$$

$$\|\mathcal{G}f\|_\Omega^2 \le M t \int_0^t f^2(s)\,ds + \int_0^t f^2(s)\,ds\,,$$

$$\|\mathcal{G}f\|_\Omega^2 \le (Mt + 1)\,\|f\|_{[0, t]}^2\,; \qquad (\text{A.4})$$

then (A.3, 4) show the boundedness of $\mathcal{G}^{-1}$ and $\mathcal{G}$, respectively.



Subject Index

Almost sure (a.s.) 4
Almost everywhere (a.e.) 104
Banach space 82, 140
Borel algebra 2
Borel function 2
Bounded variation 46, 55
  in the strong sense 52
  in the weak sense 57
Brownian motion 27
Calculus in m.s. 30
Cauchy sequence 23, 30, 82, 104
Cauchy's inequality 22, 23, 31, 104, 137
Centered stochastic process 38
Characteristic function 9
Chebyshev's inequality 24
Complete space 24, 31, 82, 105
Continuity in m.s. 33
Convergence in m.s. 30
Correlation function 25
Correlation (function) matrix 25
Covariance function 25
Covariance (function) matrix 26
Cross correlation function 25
Cross covariance function 25
Cross covariance (function) matrix 26
Degenerate r.v. 7
Degenerate stochastic process 38
Differentiability in m.s. 36
Distance 23, 104
Dynamic system 80, 92, 146
Error covariance matrix 149
Event 1
Filtering 80
Fundamental sequence 23
Fundamental solution 89
Gaussian distribution 16
Gaussian manifold 17
Gaussian-Markov vector process 93
Gaussian N-vector 16
Gaussian process 26
Gaussian r.v. 10
General Wiener-Levy N-vector 72
Hilbert space 24, 105, 136
Increment 28
Independence (stochastic) 13-15
Inner product 22, 104, 136
Intermediate point 40
Joint characteristic function 13
Joint probability density function 12
Joint probability distribution function 11
Kailath's innovations approach 116
Kalman-Bucy estimator 101, 116, 128
Kalman-Bucy filtering 80, 100-102, 128, 154
Kalman filter 100
Least squares 101, 114
Lebesgue measure 3
Lebesgue measure space 3
Limit in mean (square) (l.i.m.) 31
Linear hull 27
Linear least-squares estimation 101
Linear minimum variance estimator 101
Liptser and Shiryayev, theorem of X, 155
Marginal distribution function 12
Markov vector process 93
Mathematical expectation 5
Mean 5
Mean square (m.s.) 30
Measurable set 2
Measurable space 2
Measure 2
  space 2
Noise 28, 75, 80
Norm 23, 81, 104, 136, 137, 140
Normal N-vector 16
Normal r.v. 10


Observation 100, 154
  noise VI, 100, 154
  process 100
Orthogonal 15, 22
  projection 101, 114, 115, 128, 154, 155
Partial integration 42
Partition 40, 54
Probability 1, 3
  density function 8
  distribution function 7
  measure 3
  space 3
Random experiment
Random process 18
Random variable (r.v.) 4
Random vector 11
Realization 18
Reconstruction 102
Refinement 41, 54
Riccati differential equation 152
Riemann integral in m.s. 42
Riemann sum 42
Riemann-Stieltjes (R-S) integral 56
R-S integral in m.s. 41
R-S sum 41, 56
Sample calculus 76
Sample function 18
Sample solution 81
Sample space 1
Second order process 24
Second order r.v. 21
Simple r.v. 4
Solution in m.s. sense 81
Standard Wiener-Levy N-vector 29, 70
Standard Wiener-Levy process 28, 68
State vector 80
Stochastic dynamic system 80, 81
Stochastic independence 13, 14, 15
Stochastic process 18
Subdivision point 40
Total variation 46, 52, 55
Trajectory 18
Tychonoff, theorem of 124
Uniformly m.s. continuity 34
Variance 7
Variation 46, 52, 55
White noise 28, 75, 80
Wiener-Hopf equation 130
Wiener-Levy N-vector 29, 72
Wiener-Levy process 28, 68
Wong and Zakai, theorem of 99
