
On Instability in Data Assimilation

Roland Potthast (1,2,3), Boris Marx (3), Alexander Moodey (2), Amos Lawless (2), Peter Jan van Leeuwen (2)

(1) German Meteorological Service (DWD), (2) University of Reading, United Kingdom, (3) University of Göttingen, Germany


Outline

1 Motivation and Introduction
2 Variational Data Assimilation
   Tikhonov and 3dVar
   4dVar
   Spectral Representation
3 Convergence Analysis for Cycled Assimilation 1: Range Arguments
   Setup: Constant System
   Convergence for Data f ∈ R(H) / Divergence for f ∉ R(H)
4 Convergence Analysis 2: Regularization of Assimilation
   Setup: High-Frequency Damping Systems
   Analysis Error Bounds


Motivation and Introduction: Data Assimilation

Motivation I: Numerical Weather Forecast ...

Warn and Protect

Plan Travel


Motivation II: Numerical Weather Forecast ...

Logistics

Rivers and Environment

Air Control


Data Assimilation at the Deutscher Wetterdienst


Modelling of the Atmosphere: Geometry

GME/ICON: resolution 30 km
COSMO-EU: resolution 7 km
COSMO-DE: resolution 2.8 km


Measurements for State Determination ...

Synop, TEMP, Radiosondes, Buoys, Airplanes (AMDAR), Radar, Wind Profiler, Scatterometer, Radiances, GPS/GNSS, Ceilometer, Lidar


Mathematical Setup for Data Assimilation

State space:
X: state space, containing all state variables in one vector ϕ
ϕ: state of the atmosphere
t_k: time discretization point
ϕ_k: state at time t_k
M_k : X → X: model operator at time t_k, ϕ_k ↦ ϕ_{k+1} = M_k(ϕ_k)

Observation space:
Y_k: observation space at time t_k
f_k: observation vector at time t_k
H_k : X → Y_k: observation operator
x, y: points in physical space


Data Assimilation Task

Definition (Data Assimilation Task)

Given measurements f_k at t_k for k = 1, 2, 3, ..., determine the states ϕ_k from the equations

Hϕ_k = f_k,   k = 1, 2, 3, ...,   (1)

taking into account the model dynamics given by M_k.

Usually the measurement space is dynamic, i.e. it changes in every time step.
In general H is a nonlinear, non-injective, ill-posed operator.
The data f_k contain significant observation error, with stochastic components and a dynamic bias.


Variational Data Assimilation: Tikhonov and 3dVar

Tikhonov Data Assimilation

In every assimilation step k ∈ N we solve the variational minimization problem of finding the minimum of

J(ϕ) := α‖ϕ − ϕ^{(b)}_{k+1}‖² + ‖f_{k+1} − Hϕ‖²,   ϕ ∈ X,   (2)

where

ϕ^{(b)}_{k+1} := M ϕ^{(a)}_k,   k = 0, 1, 2, ...   (3)

For linear operators the minimum is given by the normal equations, which can be reformulated into the update formula

ϕ^{(a)}_{k+1} = ϕ^{(b)}_{k+1} + R_α (f_{k+1} − H ϕ^{(b)}_{k+1})   (4)

with R_α = (αI + H*H)^{-1} H*, or, in terms of the analysis fields ϕ^{(a)}_k,

ϕ^{(a)}_{k+1} = M ϕ^{(a)}_k + R_α (f_{k+1} − H M ϕ^{(a)}_k)   (5)

for k = 0, 1, 2, ...
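The cycled update (5) is easy to sketch numerically. The following minimal Python/NumPy illustration is a toy under stated assumptions: the linear model M, the observation operator H, the data and the parameter α are all invented for demonstration and are not the configurations discussed in this talk.

    import numpy as np

    def tikhonov_gain(H, alpha):
        # R_alpha = (alpha I + H* H)^{-1} H*, cf. eq. (4)
        n = H.shape[1]
        return np.linalg.solve(alpha * np.eye(n) + H.T @ H, H.T)

    def cycled_3dvar(M, H, data, phi_a0, alpha):
        # Cycled Tikhonov / 3dVar: phi^a_{k+1} = M phi^a_k + R_alpha (f_{k+1} - H M phi^a_k), eq. (5)
        R_alpha = tikhonov_gain(H, alpha)
        phi_a = phi_a0.copy()
        analyses = [phi_a]
        for f in data:
            phi_b = M @ phi_a                          # first guess (background), eq. (3)
            phi_a = phi_b + R_alpha @ (f - H @ phi_b)  # analysis update, eq. (4)
            analyses.append(phi_a)
        return analyses

    # Example with random toy operators and slightly noisy data:
    rng = np.random.default_rng(0)
    M, H = 0.9 * np.eye(3), rng.standard_normal((2, 3))
    data = [H @ np.ones(3) + 0.01 * rng.standard_normal(2) for _ in range(5)]
    print(cycled_3dvar(M, H, data, np.zeros(3), alpha=0.1)[-1])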

3dVAR - 1

1 Start with ϕ^{(a)}_0 and for k = 1, 2, 3, ... do:
2 Calculate the first guess
   ϕ^{(b)}_k = M_{k−1} ϕ^{(a)}_{k−1}   (6)
3 Assimilate the data f_k at time t_k, calculating ϕ^{(a)}_k.


3dVAR - 2

Functional at a time slice:

J(ϕ) = ‖ϕ − ϕ^{(b)}‖²_{B^{-1}} + ‖f − Hϕ‖²_{R^{-1}}   (7)

Update formula:

ϕ^{(a)}_k = ϕ^{(b)}_k + (B^{-1} + H*R^{-1}H)^{-1} H* R^{-1} (f_k − H(ϕ^{(b)}_k))
          = ϕ^{(b)}_k + B H* (R + H B H*)^{-1} (f_k − H(ϕ^{(b)}_k)),   (8)

where B H* (R + H B H*)^{-1} is the Kalman gain matrix and f_k − H(ϕ^{(b)}_k) is the observation minus first guess.
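The two forms of the gain in (8) can be checked against each other numerically; a minimal sketch, with the dimensions and the matrices B, R, H invented purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 5, 3                              # state and observation dimensions (arbitrary)
    H = rng.standard_normal((m, n))
    B = np.eye(n) + 0.1 * np.ones((n, n))    # some symmetric positive definite B
    R = 0.5 * np.eye(m)                      # some symmetric positive definite R

    # (B^{-1} + H* R^{-1} H)^{-1} H* R^{-1}  versus  B H* (R + H B H*)^{-1}
    K1 = np.linalg.solve(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H, H.T @ np.linalg.inv(R))
    K2 = B @ H.T @ np.linalg.inv(R + H @ B @ H.T)
    print(np.allclose(K1, K2))               # True: both expressions give the same Kalman gain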


3dVar = Tikhonov Regularization in a weighted space

We study the weighted scalar products

⟨ϕ, ψ⟩ := ⟨ϕ, B^{-1}ψ⟩_{L²},   ⟨f, g⟩ := ⟨f, R^{-1}g⟩_{L²}   (9)

with some self-adjoint invertible matrices B and R. The adjoint with respect to the weighted scalar products is denoted by H′. Then

⟨f, Hϕ⟩ = ⟨f, R^{-1}Hϕ⟩_{L²} = ⟨R^{-1}f, Hϕ⟩_{L²} = ⟨H*R^{-1}f, ϕ⟩_{L²}
        = ⟨H*R^{-1}f, BB^{-1}ϕ⟩_{L²} = ⟨BH*R^{-1}f, B^{-1}ϕ⟩_{L²} = ⟨BH*R^{-1}f, ϕ⟩ = ⟨H′f, ϕ⟩.

This leads to

H′ = B H* R^{-1}   (10)

and thus

B H* (R + H B H*)^{-1} = B H* R^{-1} (I + H B H* R^{-1})^{-1} = H′ (I + H H′)^{-1} = (I + H′H)^{-1} H′.
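Relation (10) can also be verified numerically, by checking ⟨f, Hϕ⟩ = ⟨H′f, ϕ⟩ in the weighted products (9) for random vectors; everything below (sizes, B, R, H) is invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 6, 4                                   # arbitrary state / observation dimensions
    H = rng.standard_normal((m, n))
    A = np.eye(n) + 0.2 * rng.standard_normal((n, n)); B = A @ A.T   # SPD weight on X
    C = np.eye(m) + 0.2 * rng.standard_normal((m, m)); R = C @ C.T   # SPD weight on Y

    phi, f = rng.standard_normal(n), rng.standard_normal(m)

    H_prime = B @ H.T @ np.linalg.inv(R)          # weighted adjoint H' = B H* R^{-1}, eq. (10)
    lhs = f @ np.linalg.inv(R) @ (H @ phi)        # <f, H phi> with the R^{-1}-weighted product
    rhs = (H_prime @ f) @ np.linalg.inv(B) @ phi  # <H' f, phi> with the B^{-1}-weighted product
    print(np.isclose(lhs, rhs))                   # True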


Variational Data Assimilation: 4dVar

4dVar - 1 (functional)

Functional on a time interval, minimized by a gradient method:

J(ϕ) = ‖ϕ − ϕ^{(b)}‖²_{B^{-1}} + ∑_{k=1}^{K} ‖f_k − Hϕ_k‖²_{R^{-1}},   ϕ_k = M[t_0, t_k] ϕ,

where M[s, t] denotes the model operator from time s to time t.


4dVar - 2 (cycling)

Functional

J(ϕ) = ‖ϕ − ϕ^{(b)}_ℓ‖²_{B^{-1}} + ∑_{k=1}^{K} ‖f_{ℓ+k} − Hϕ_{ℓ+k}‖²_{R^{-1}},   ϕ_{ℓ+k} = M[t_ℓ, t_{ℓ+k}] ϕ,

for ℓ = 0, K, 2K, 3K, ...

The minimizer gives the analysis ϕ^{(a)}_ℓ; the next background is

ϕ^{(b)}_{ℓ+K} = M[t_ℓ, t_{ℓ+K}] ϕ^{(a)}_ℓ.   (11)


4dVar - 3 (with linear M rewrite as Tikhonov)

Assume that M is linear. Then, with

H_ℓ ϕ := (H M[t_ℓ, t_{ℓ+1}] ϕ, H M[t_ℓ, t_{ℓ+2}] ϕ, ..., H M[t_ℓ, t_{ℓ+K}] ϕ),   f_ℓ := (f_{ℓ+1}, f_{ℓ+2}, ..., f_{ℓ+K}),   (12)

we can rewrite 4dVar as a three-dimensional assimilation algorithm solving

H_ℓ ϕ = f_ℓ.   (13)

The regularized solution provides ϕ^{(a)}_ℓ, from which ϕ^{(b)}_{ℓ+K} is calculated by an application of M[t_ℓ, t_{ℓ+K}].
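For a linear, time-constant toy model the stacking in (12) is a few lines of code. The sketch below assumes a plain Tikhonov parameter α in place of the full B- and R-weighted functional; the operators and window length are invented for illustration.

    import numpy as np

    def stack_4dvar(M, H, K):
        # Builds H_l of eq. (12): rows H M[t_l, t_{l+k}], k = 1, ..., K (time-constant linear M)
        blocks, Mk = [], np.eye(M.shape[0])
        for _ in range(K):
            Mk = M @ Mk
            blocks.append(H @ Mk)
        return np.vstack(blocks)

    def solve_window(M, H, f_window, phi_b, alpha):
        # Tikhonov-regularized solve of H_l phi = f_l, eq. (13), around the background phi_b
        H_l = stack_4dvar(M, H, len(f_window))
        f_l = np.concatenate(f_window)
        n = H_l.shape[1]
        R_alpha = np.linalg.solve(alpha * np.eye(n) + H_l.T @ H_l, H_l.T)
        return phi_b + R_alpha @ (f_l - H_l @ phi_b)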


Variational Data Assimilation: Spectral Representation

Spectral Representation 1

Assume that H : X → Y is linear and compact. Then there is a singular system {(ψ_n, g_n, μ_n), n ∈ N} such that Hψ_n = μ_n g_n, H′g_n = μ_n ψ_n and

H′H ψ_n = μ_n² ψ_n.   (14)

This yields

(αI + H′H) ψ_n = (α + μ_n²) ψ_n,   n ∈ N.   (15)

With measurements

f = ∑_{n=1}^{∞} f_n g_n ∈ Y   (16)

and the spectral coefficients of the Tikhonov solution ϕ denoted by γ_n, 3dVar or Tikhonov regularization, respectively, i.e. (αI + H′H)ϕ = H′f, is equivalent to the spectral damping scheme

γ_n = μ_n / (α + μ_n²) · f_n,   n ∈ N.   (17)
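The damping scheme (17) can be checked against a direct Tikhonov solve via the singular value decomposition. In the sketch below the plain Euclidean adjoint stands in for H′, and H, f and α are random toy quantities invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    H = rng.standard_normal((8, 5))
    f = rng.standard_normal(8)
    alpha = 0.1

    # Direct Tikhonov / 3dVar solve: (alpha I + H'H) phi = H' f
    phi_direct = np.linalg.solve(alpha * np.eye(5) + H.T @ H, H.T @ f)

    # Spectral damping scheme, eq. (17): gamma_n = mu_n / (alpha + mu_n^2) * f_n
    U, mu, Vt = np.linalg.svd(H, full_matrices=False)
    f_n = U.T @ f                           # spectral coefficients of the data, eq. (16)
    gamma = mu / (alpha + mu**2) * f_n      # damped coefficients
    phi_spectral = Vt.T @ gamma

    print(np.allclose(phi_direct, phi_spectral))   # True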


Spectral Representation 2

The true inverse is

γ^{true}_n = (1/μ_n) · f^{true}_n.   (18)

This inversion is unstable if μ_n → 0 for n → ∞!

Tikhonov regularization is stable for α > 0:

γ_n = μ_n / (α + μ_n²) · f_n,   n ∈ N.   (19)

Tikhonov shifts the eigenvalues of H′H by α.


Convergence Analysis for Cycled Assimilation 1: Range Arguments
Setup: Constant System

A system with constant dynamics

As a simple model system for study we use constant dynamics, M = Identity, i.e.

ϕ^{(b)}_{k+1} = ϕ^{(a)}_k,   k = 1, 2, 3, ...,   (20)

for 3dVar. Also, we employ identical measurements f_k ≡ f, k ∈ N.

Then 3dVar is given by the iteration

ϕ_k = ϕ_{k−1} + (αI + H′H)^{-1} H′ (f − Hϕ_{k−1}),   k = 1, 2, 3, ...   (21)

(This coincides with the work of Engl on 'iterated Tikhonov regularization'!)

For the spectral coefficients γ_{n,k} of ϕ_k this leads to the iteration

γ_{n,k} = γ_{n,k−1} + μ_n / (α + μ_n²) · (f_{n,k} − μ_n γ_{n,k−1}).   (22)
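In the spectral domain the cycling reduces to the scalar recursion (22), which can be run per mode; a minimal sketch with invented values for α, μ_n and the (constant) data coefficient f_n:

    def cycled_coefficient(mu_n, f_n, alpha, gamma_0=0.0, steps=50):
        # Scalar 3dVar cycling for a single spectral mode, eq. (22), with constant data f_n
        gamma = gamma_0
        history = [gamma]
        for _ in range(steps):
            gamma = gamma + mu_n / (alpha + mu_n**2) * (f_n - mu_n * gamma)
            history.append(gamma)
        return history

    # For a well-observed mode the iterates approach f_n / mu_n, the naive inverse:
    print(cycled_coefficient(mu_n=0.5, f_n=1.0, alpha=0.1)[-1])   # close to 2.0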


Spectral Formula I

We employ

f = Hϕ^{(true)} + δ,   f_{n,k} = μ_n γ^{(true)}_n + δ_n,   (23)

and obtain

γ_{n,k} = γ_{n,k−1} + μ_n²/(α + μ_n²) · (γ^{(true)}_n − γ_{n,k−1}) + μ_n²/(α + μ_n²) · δ_n/μ_n.

Theorem (Spectral Formula I)
The 3dVar cycling for a constant dynamics with identical measurements f = Hϕ^{(true)} + δ leads to the spectral update formula

γ_{n,k} = (1 − q_n) γ^{(true)}_n + q_n γ_{n,k−1} + (1 − q_n)/μ_n · δ_n   (24)

using

q_n = α/(α + μ_n²) = 1 − μ_n²/(α + μ_n²).   (25)


Spectral Formula II

Theorem (Spectral Formula II)
The 3dVar cycling for a constant dynamics with identical measurements f = Hϕ^{(true)} + δ can be carried out explicitly. The development of its spectral coefficients is given by

γ_{n,k} = (1 − q_n^k) γ^{(true)}_n + q_n^k γ_{n,0} + (1 − q_n^k)/μ_n · δ_n   (26)

using

q_n = α/(α + μ_n²) = 1 − μ_n²/(α + μ_n²).   (27)

Proof. Induction over k. □
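The closed form (26) is quickly confirmed against the one-step recursion (24); the values of α, μ_n, δ_n below are invented for illustration.

    def closed_form(mu_n, gamma_true, delta_n, gamma_0, alpha, k):
        # Spectral Formula II, eq. (26)
        q_n = alpha / (alpha + mu_n**2)
        return (1 - q_n**k) * gamma_true + q_n**k * gamma_0 + (1 - q_n**k) / mu_n * delta_n

    mu_n, gamma_true, delta_n, gamma_0, alpha = 0.3, 1.5, 0.2, 0.0, 0.05
    q_n = alpha / (alpha + mu_n**2)
    gamma = gamma_0
    for k in range(20):                       # iterate the one-step formula (24)
        gamma = (1 - q_n) * gamma_true + q_n * gamma + (1 - q_n) / mu_n * delta_n
    print(abs(gamma - closed_form(mu_n, gamma_true, delta_n, gamma_0, alpha, 20)) < 1e-10)  # True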


Convergence Analysis for Cycled Assimilation 1: Range Arguments
Convergence for Data f ∈ R(H) / Divergence for f ∉ R(H)

Convergence for f ∈ R(H)

Theorem (Convergence for f ∈ R(H))
Cycled 3dVar for a constant dynamics and identical measurements f^{(true)} + δ ∈ R(H) tends to ϕ^{(true)} + σ with Hσ = δ for k → ∞.

Proof. We study

γ_{n,k} = (1 − q_n^k) γ^{(true)}_n + q_n^k γ_{n,0} + (1 − q_n^k)/μ_n · δ_n   (28)

for k → ∞. Since 0 < q_n < 1, we have

q_n^k → 0,   (1 − q_n^k) → 1,   k → ∞.   (29)

Since δ = Hσ, the element σ with spectral coefficients δ_n/μ_n is in X, and cycled 3dVar converges towards ϕ^{(true)} + σ. □


Divergence for f ∉ R(H)

Theorem (Divergence for f ∉ R(H))
For a constant dynamics and identical measurements f^{(true)} + δ ∉ R(H), cycled 3dVar diverges for k → ∞.

Proof. We study

γ_{n,k} = (1 − q_n^k) γ^{(true)}_n + q_n^k γ_{n,0} + (1 − q_n^k)/μ_n · δ_n   (30)

for k → ∞. Let σ_k ∈ X denote the element with spectral coefficients

σ_{n,k} = (1 − q_n^k)/μ_n · δ_n,   k, n ∈ N,

which is well defined since for every fixed k ∈ N

|(1 − q_n^k)/μ_n| = |((α + μ_n²)^k − α^k) / ((α + μ_n²)^k μ_n)|   (31)

is bounded uniformly for n ∈ N.


Since δ ∉ R(H) we know that

S_L := ∑_{n=1}^{L} |δ_n/μ_n|² → ∞,   L → ∞.   (32)

Given C > 0 we can choose L such that S_L > 2C. Then

‖σ_k‖² ≥ ∑_{n=1}^{L} |(1 − q_n^k) δ_n/μ_n|² > C   (33)

for k ∈ N sufficiently large, which proves

‖σ_k‖ → ∞,   k → ∞,   (34)

and the proof is complete. □
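The divergence mechanism can be made visible in a small experiment: take rapidly decaying singular values μ_n and noise coefficients δ_n that do not decay, so that the δ_n/μ_n are not square-summable, and watch the norm of σ_k grow with the cycle index k. All numbers below are invented for illustration.

    import numpy as np

    N, alpha = 40, 0.1
    mu = 2.0 ** -np.arange(1, N + 1)        # rapidly decaying singular values: ill-posed H
    delta = np.full(N, 0.01)                # non-decaying noise coefficients: delta not in R(H)

    for k in [1, 10, 100, 1000]:
        q = alpha / (alpha + mu**2)
        sigma_k = (1 - q**k) / mu * delta   # noise part of the analysis coefficients, cf. (30)
        print(k, np.linalg.norm(sigma_k))   # grows with k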


Numerical Example: Dynamic Magnetic Tomography

[Figure: error of cycled 3dVar versus time index k (logarithmic axis, k up to 10³) for the dynamic magnetic tomography example; error values on the order of 2.2–2.4 × 10⁻³.]


Convergence Analysis 2: Regularization of Assimilation
Setup: High-Frequency Damping Systems

ϕ^{(a)}_{k+1} − ϕ^{(true)}_{k+1} = M (ϕ^{(a)}_k − ϕ^{(true)}_k) + R_α H M (ϕ^{(true)}_k − ϕ^{(a)}_k) + R_α f^{(δ)}_{k+1}
                                 = (I − R_α H) M (ϕ^{(a)}_k − ϕ^{(true)}_k) + R_α f^{(δ)}_{k+1}.   (35)

We abbreviate the analysis error by

e_k := ϕ^{(a)}_k − ϕ^{(true)}_k,   k = 1, 2, ...,   (36)

and define

Λ := (I − R_α H) M   (37)

to obtain the iteration formula

e_{k+1} = Λ e_k + R_α f^{(δ)}_{k+1},   k = 0, 1, 2, ...   (38)


System Evolution

Theorem
Assume that the error f^{(δ)}_k does not depend on k, i.e. that we feed some constant error into the data assimilation scheme. Then the error terms e_k with initial error e_0, described by the update formula (38), evolve according to

e_k = Λ^k e_0 + ( ∑_{ℓ=0}^{k−1} Λ^ℓ ) e^{(δ)}   (39)

with e^{(δ)} := R_α f^{(δ)}. If (I − Λ)^{-1} exists, this can be written as

e_k = Λ^k e_0 + (I − Λ)^{-1} (I − Λ^k) e^{(δ)}.   (40)

This update formula was already known in the 1970s. It was derived for a finite-dimensional system with a well-posed observation operator, in a stochastic framework, for the Kalman filter.
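Formulas (39) and (40) are easy to confirm by iterating (38) directly; below is a small sketch with an invented matrix Λ and a constant error term e^(δ).

    import numpy as np

    rng = np.random.default_rng(3)
    n = 4
    Lam = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # some small random matrix (invented)
    e0 = rng.standard_normal(n)
    e_delta = rng.standard_normal(n)                       # constant term e^(delta) = R_alpha f^(delta)

    e = e0.copy()
    for _ in range(30):                                    # iterate e_{k+1} = Lam e_k + e_delta, eq. (38)
        e = Lam @ e + e_delta

    Lk = np.linalg.matrix_power(Lam, 30)                   # closed form, eq. (40)
    e_closed = Lk @ e0 + np.linalg.solve(np.eye(n) - Lam, (np.eye(n) - Lk) @ e_delta)
    print(np.allclose(e, e_closed))                        # True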


Frobenius Norm

Let {ψ_ℓ : ℓ ∈ N} be a complete orthonormal system in X. Then any vector ϕ ∈ X can be decomposed into its Fourier sum

ϕ = ∑_{ℓ=1}^{∞} α_ℓ ψ_ℓ   (41)

with α_ℓ := ⟨ϕ, ψ_ℓ⟩ for ℓ ∈ N. We apply this to Mϕ to derive

Mϕ = ∑_{ℓ=1}^{∞} ⟨Mϕ, ψ_ℓ⟩ ψ_ℓ = ∑_{ℓ=1}^{∞} ∑_{j=1}^{∞} α_j ⟨Mψ_j, ψ_ℓ⟩ ψ_ℓ = ∑_{ℓ=1}^{∞} ∑_{j=1}^{∞} ψ_ℓ M_{ℓ,j} α_j,

with the infinite matrix M_{ℓ,j} = ⟨Mψ_j, ψ_ℓ⟩. The Frobenius norm of M with respect to the orthonormal system {ψ_ℓ : ℓ ∈ N} is defined by

‖M‖²_{Fro} := ∑_{ℓ=1}^{∞} ∑_{j=1}^{∞} |M_{ℓ,j}|².   (42)


Frobenius Systems: Damping of High Frequencies

Lemma
If M is self-adjoint with respect to the scalar product ⟨·, ·⟩, then its Frobenius norm (42) is independent of the orthonormal system {ψ_ℓ : ℓ ∈ N}.

The identity operator I does not have a bounded Frobenius norm.
Systems with a bounded Frobenius norm are damping on higher modes or frequencies.


Convergence Analysis 2: Regularization of Assimilation
Analysis Error Bounds

Space Decomposition

Let the orthonormal system {ψ_ℓ : ℓ ∈ N} in X be given by the singular system of the observation operator H : X → Y. In this case we define an orthogonal decomposition of the space X by

X_1^{(n)} := span{ψ_1, ..., ψ_n}  (lower modes),   X_2^{(n)} := span{ψ_{n+1}, ψ_{n+2}, ...}  (higher modes).   (43)

Using the orthogonal projection operators P_1 of X onto X_1 and P_2 of X onto X_2, we decompose M into

M_1 := P_1 M,   M_2 := P_2 M.   (44)

Using N = (I − R_α H) and Λ = NM, we obtain

Λ = N|_{X_1} M_1 (lower modes) + N|_{X_2} M_2 (higher modes).   (45)


Norm estimates for N

The operator N maps X_j, j = 1, 2, into itself. This leads to

‖Λϕ‖² = ‖(N|_{X_1} M_1 + N|_{X_2} M_2) ϕ‖² = ‖N|_{X_1} M_1 ϕ‖² + ‖N|_{X_2} M_2 ϕ‖²,   (46)

and hence to the norm estimate

‖Λ‖² ≤ ‖N|_{X_1}‖² ‖M_1‖² + ‖N|_{X_2}‖² ‖M_2‖².   (47)

We derive estimates for all of the terms above.


Norm Estimates 1

‖Λ‖² ≤ ‖N|_{X_1}‖² ‖M_1‖² + ‖N|_{X_2}‖² ‖M_2‖².   (48)

Lemma
The norm of the operator N|_{X_2} is given by

‖N|_{X_2}‖ = 1   (49)

for all n ∈ N and α > 0.

Lemma
Assume that M is self-adjoint with bounded Frobenius norm. Then, given ρ > 0, there is n ∈ N such that for M_2 = M_2^{(n)} we have

‖M_2‖ < ρ.   (50)


Norm Estimates 2

‖Λ‖² ≤ ‖N|_{X_1}‖² ‖M_1‖² + ‖N|_{X_2}‖² ‖M_2‖².   (51)

We allow ‖M_1‖ to be an arbitrary constant and write

‖M_1‖ =: c.   (52)

Lemma
On X_1, for N = I − R_α H, we have the norm estimate

‖N|_{X_1}‖ = sup_{ℓ=1,...,n} | α / (α + μ_ℓ²) |,   (53)

where the μ_ℓ are the singular values of the operator H, ordered according to their size and multiplicity. In particular, given ε > 0 and n ∈ N, we can choose α > 0 sufficiently small such that

‖N|_{X_1}‖ < ε.   (54)


Stabilization of data assimilation

Recall

e_k = Λ^k e_0 + (I − Λ)^{-1} (I − Λ^k) e^{(δ)}.   (55)

Theorem
Assume that the system M is self-adjoint and its Frobenius norm is bounded, and let α denote the regularization parameter of a cycled data assimilation scheme. Then, for α > 0 sufficiently small, we have ‖Λ‖ < 1. Assume further that the observation error f^{(δ)} is bounded in norm by δ > 0. In this case the analysis error is bounded over time, with

lim sup_{k→∞} ‖e_k‖ ≤ ‖R_α‖ δ / (1 − ‖Λ‖).   (56)
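A small numerical illustration of the stabilization result: for a self-adjoint model M that damps the higher modes of an ill-posed H (both invented below, diagonal in a common eigenbasis, with ‖M‖ = 1.5 > 1), the norm of Λ = (I − R_α H)M falls below 1 once α is chosen small enough.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 8
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # common orthonormal eigenbasis
    mu = 2.0 ** -np.arange(n, dtype=float)             # decaying singular values: ill-posed H
    H = U @ np.diag(mu) @ U.T                          # self-adjoint H, for simplicity
    m = np.array([1.5] + [0.5 ** j for j in range(1, n)])   # high-frequency damping M, ||M|| > 1
    M = U @ np.diag(m) @ U.T

    for alpha in [10.0, 1.0, 0.1, 0.01]:
        R_alpha = np.linalg.solve(alpha * np.eye(n) + H.T @ H, H.T)
        Lam = (np.eye(n) - R_alpha @ H) @ M
        print(alpha, np.linalg.norm(Lam, 2))           # drops below 1 for small alpha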


Numerical Example


Summary

Summary on DA Instability

3dVar (and 4dVar for linear systems) can be analysed as a cycled Tikhonov regularization.
We have shown that even for very stable dynamics, where M = const, cycled Tikhonov with ill-posed observation operators diverges if we have data f ∉ R(H).
The divergence carries over to 3dVar and 4dVar for simple systems.
For systems damping high frequencies we have shown that stability depends on the choice of the regularization parameter α.
For self-adjoint systems M with bounded Frobenius norm we can always achieve a stable assimilation by choosing α appropriately, even if ‖M‖² > 1.
