Algebraic conditioning analysis of the incremental unknowns preconditioner
Salvador Garcia 1
Instituto de Matemáticas y Física, Universidad de Talca, Casilla 721, Talca, Chile
Received 1 March 1997; received in revised form 1 March 1998; accepted 1 April 1998
Abstract
Incremental unknowns are efficient in the numerical solution of elliptic linear differential equations, but no rigorous theoretical justification was available. Hereafter, we establish that the condition number of the incremental unknowns matrix associated to the Laplace operator is $O(1/h_0^2)\,O((\log h)^2)$, where $h_0$ is the mesh size of the coarsest grid and $h$ is the mesh size of the finest grid. Furthermore, if block diagonal scaling is used, then the condition number of the preconditioned incremental unknowns matrix associated to the Laplace operator comes out to be $O((\log h)^2)$; last, we observe that block diagonal scaling by the Laplace operator (scaled by $h_0^2$) on the coarsest grid and by $4I$ on the fine grids appears as an acceptable alternative. © 1998 Elsevier Science Inc. All rights reserved.
Keywords: Linear algebra; Finite differences; Incremental unknowns/hierarchical basis; Laplace operator; Poisson equation
1. Introduction
Incremental unknowns arise [1] in dynamical system theory, where the objective is to study the long-term dynamic behavior of the solutions of dissipative evolutionary equations when finite-difference approximations in a variational framework [2,3] of such equations are used and when several levels of discretization are considered; to pursue that goal, higher-order incremental unknowns were introduced to study the long-term dynamic behavior of the Kuramoto-Sivashinsky equation [4] and of the incompressible Navier-Stokes equations [5]. On the whole, a general elliptic linear differential equation has to be solved at each time step; incremental unknowns are efficient in the numerical solution of such equations [11], but no rigorous theoretical justification was available. Hereafter, using graph techniques, we prove that the condition number of the incremental unknowns matrix associated to the Laplace operator is $O(1/h_0^2)\,O((\log h)^2)$, where $h_0$ is the mesh size of the coarsest grid and $h$ is the mesh size of the finest grid; furthermore, if block diagonal scaling is used, then the condition number of the preconditioned incremental unknowns matrix associated to the Laplace operator comes out to be $O((\log h)^2)$. Besides, we observe that block diagonal scaling by the Laplace operator (scaled by $h_0^2$) on the coarsest grid and by $4I$ on the fine grids appears as an acceptable alternative.
Applied Mathematical Modelling 22 (1998) 351–366
1 E-mail: [email protected].
0307-904X/98/$19.00 © 1998 Elsevier Science Inc. All rights reserved.
PII: S0307-904X(98)10017-3
The results are obtained in essence by deriving appropriate bounds on the generalized Rayleigh quotient

$$\frac{(v, h^2(-\Delta_h)v)}{\left(v, (SK^{-1}S^{\mathrm T})^{-1}v\right)} = \frac{(v, h^2(-\Delta_h)v)}{\left(v, S^{-\mathrm T}KS^{-1}v\right)},$$

as stated in [6]; here $\Delta_h$ is the finite-difference Laplace operator and $K$ is either the identity or a block diagonal part of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$, where $S$ stands for the transfer matrix from the incremental unknowns $\hat x$ to the nodal unknowns $x$, i.e., $x = S\hat x$; the coefficients of this incremental unknowns matrix are computed (bounded) using the variational approach introduced in [4]. It follows promptly from linear algebra lemmas established on p. 197 of Ref. [6] and from the Geršgorin theorem that the maximum eigenvalue of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$ is bounded by an absolute constant independent of $i$ (the number of levels); then we infer the result stated before. Hereon we observe that applying the conjugate gradient method to the linear system $S^{\mathrm T}AS\hat x = S^{\mathrm T}b$ is equivalent to applying the preconditioned conjugate gradient method to the linear system $Ax = b$, where the preconditioning matrix $K$ for $A$ is $K = (SS^{\mathrm T})^{-1}$; furthermore, we remark that applying the conjugate gradient method to the linear system $S^{\mathrm T}AS\hat x = S^{\mathrm T}b$ with preconditioning matrix $K$ for $S^{\mathrm T}AS$ is equivalent to applying the preconditioned conjugate gradient method to the linear system $Ax = b$ with preconditioning matrix $\widetilde K$ for $A$, where $\widetilde K = (SK^{-1}S^{\mathrm T})^{-1}$.
The outline of this article is as follows. In Section 2 we present the incremental unknowns and describe, using graph techniques, the block-matrix structure of the matrices $S^{-1}$, $(SS^{\mathrm T})^{-1}$; then, in Section 3, we study the condition number of the incremental unknowns matrix by deriving appropriate bounds on the generalized Rayleigh quotient. In Section 4 we consider block diagonal (scaling) preconditioning and describe, using graph techniques, the block-matrix structure of the matrix $(SK^{-1}S^{\mathrm T})^{-1}$; then we study the condition number of the preconditioned incremental unknowns matrix by deriving appropriate bounds on the generalized Rayleigh quotient. In Section 5 we report computational experiments and state remarks.
2. Incremental unknowns
Here we consider the domain $\Omega = (0,1)\times(0,1)$; the scalar product and norm of the Hilbert space $L^2(\Omega)$ will be denoted throughout this work by $(\cdot,\cdot)$ and by $|\cdot|$, respectively. First we present the finite-difference variational approach used to describe the spatial discretization. Let $n$ be a nonnegative integer. For the mesh size $h = 1/n$, we introduce the finite-dimensional vector space $V_h \otimes V_h$ that consists of restrictions to the plane segment $(0,1)\times(0,1)$ of step functions that are constant on the plane segments $(kh,(k+1)h)\times(lh,(l+1)h)$ for $k,l = 1,\ldots,n-1$. The space $V_h \otimes V_h$ is spanned by the basis functions $\omega_k \otimes \omega_l$, $k,l = 1,\ldots,n-1$, which are equal to 1 on the plane segment $(kh,(k+1)h)\times(lh,(l+1)h)$ and vanish outside this plane segment; this basis of $V_h \otimes V_h$ will be called the nodal basis. We write a generic vector $w_h \in V_h \otimes V_h$ as

$$w_h = \sum_{k,l=1}^{n-1} w_{k,l}\,\omega_k \otimes \omega_l. \tag{1}$$
Moreover, we introduce the finite-difference operators $\nabla_h^i$,

$$\nabla_h^i v_h(x) = \frac{1}{h}\big(v_h(x + he_i) - v_h(x)\big), \tag{2}$$

for $i = 1,2$, where $e_1 = (1,0)$, $e_2 = (0,1)$ is the canonical basis of $\mathbb{R}^2$.
The finite-difference Laplace operator reads

$$\Delta_h = \frac{1}{h^2}\begin{bmatrix} & 1 & \\ 1 & -4 & 1 \\ & 1 & \end{bmatrix}\ (\text{star}); \tag{3}$$

its associated finite-difference bilinear form reads

$$\left((u_h, v_h)\right)_h = -\sum_{i=1}^{2}\left(\nabla_h^i u_h, \nabla_h^i v_h\right). \tag{4}$$

Last, we introduce the finite-difference operator

$$H_h = \frac{1}{h^2}\begin{bmatrix} 1 & & 1 \\ & -4 & \\ 1 & & 1 \end{bmatrix}\ (\text{star}). \tag{5}$$
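The two star operators can be realized as matrices on the $(n-1)\times(n-1)$ interior grid. The following Python sketch (NumPy; the helper name `laplace_and_h` is ours, not the paper's) assembles the $h^2$-scaled operators $h^2(-\Delta_h)$ and $h^2(-H_h)$ from Kronecker products of one-dimensional stencils and checks the classical eigenvalues $4(\sin^2(k\pi h/2)+\sin^2(l\pi h/2))$ of $h^2(-\Delta_h)$:

```python
import numpy as np

def laplace_and_h(n):
    """Return the h^2-scaled operators h^2*(-Delta_h) and h^2*(-H_h)
    on the (n-1) x (n-1) interior grid of the unit square, built from
    Kronecker products of one-dimensional stencils."""
    N = n - 1
    I = np.eye(N)
    T = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # nearest neighbors
    K = 2 * I - T                                   # 1D stencil [-1 2 -1]
    minus_h2_lap = np.kron(K, I) + np.kron(I, K)    # stencil of (3), times -h^2
    minus_h2_H = 4 * np.kron(I, I) - np.kron(T, T)  # stencil of (5), times -h^2
    return minus_h2_lap, minus_h2_H

# Eigenvalues of h^2*(-Delta_h) are 4*(sin^2(k*pi*h/2) + sin^2(l*pi*h/2)).
n = 8
h = 1.0 / n
A, H = laplace_and_h(n)
ks = np.arange(1, n)
expected = sorted(4 * (np.sin(k * np.pi * h / 2) ** 2 + np.sin(l * np.pi * h / 2) ** 2)
                  for k in ks for l in ks)
computed = sorted(np.linalg.eigvalsh(A))
assert np.allclose(computed, expected)
```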
Now we present the multigrid-like framework used to introduce (second-order) incremental unknowns. Herein we consider $n = 2^\ell N$, where $\ell = i-1$ and where $i, N$ are nonnegative integers, $i, N \ge 2$, remaining fixed. For $j$ from $\ell$ down to 0, we introduce the $j$th-level uniform grid $\Omega_j$ corresponding to the mesh size $h_j = 2^{\ell-j}h$ in both directions; therefore, we obtain the nested sequence of grids

$$\Omega_\ell \supset \Omega_{\ell-1} \supset \cdots \supset \Omega_1 \supset \Omega_0. \tag{6}$$

Next we propose a hierarchical ordering of the nodal values of $w_h$ at the nodes of the finest grid $\Omega_\ell$:
· Nodal values of $w_h$ at the nodes of the fine grid $\Omega_j$ that do not belong to the coarse grid $\Omega_{j-1}$, for $j$ from $\ell$ down to 1.
· Nodal values of $w_h$ at the nodes of the coarsest grid $\Omega_0$.
Then we introduce the incremental unknowns recursively from the finest level up to the coarsest level (the coarse level being successively excluded). First, at the nodes of the fine grid $\Omega_j$ that do not belong to the coarse grid $\Omega_{j-1}$, the $j$th-level incremental unknowns consist of the increment of the nodal values of $w_h$ to the average of the nodal values of $w_h$ at the neighboring nodes in the coarse grid $\Omega_{j-1}$, for $j$ from $\ell$ down to 1. Last, at the nodes of the coarsest grid $\Omega_0$, the incremental unknowns consist of the nodal values of $w_h$ (see Fig. 1).
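As an illustration of this recursive definition, here is a small Python sketch (two levels only, with zero Dirichlet values outside the grid; the function name and layout are ours, not the paper's) that forms the second-order incremental unknowns of a nodal field:

```python
import numpy as np

def incremental_unknowns(u):
    """Two-level second-order incremental unknowns of a nodal field u
    given on the interior nodes (i, j), 1 <= i, j <= n-1, of a uniform
    grid (zero Dirichlet values outside).  Coarse nodes are those with
    both indices even; at every other node the unknown is the increment
    of the nodal value to the average at the coarse neighbors."""
    m = u.shape[0]           # m = n - 1 interior nodes per direction
    up = np.zeros((m + 2, m + 2))
    up[1:-1, 1:-1] = u       # pad with the zero boundary values
    z = u.copy()
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if i % 2 == 0 and j % 2 == 0:
                continue     # coarse node: keep the nodal value
            if i % 2 == 1 and j % 2 == 0:       # coarse neighbors left/right
                avg = (up[i - 1, j] + up[i + 1, j]) / 2
            elif i % 2 == 0 and j % 2 == 1:     # coarse neighbors below/above
                avg = (up[i, j - 1] + up[i, j + 1]) / 2
            else:                                # four diagonal coarse neighbors
                avg = (up[i - 1, j - 1] + up[i - 1, j + 1]
                       + up[i + 1, j - 1] + up[i + 1, j + 1]) / 4
            z[i - 1, j - 1] = u[i - 1, j - 1] - avg
    return z

n = 8
m = n - 1
x = np.arange(1, n) / n
U = x[:, None] + 2 * x[None, :]   # nodal values of the affine field x + 2y
Z = incremental_unknowns(U)
# increments of an affine field vanish away from the boundary,
# and the nodal values at coarse nodes are kept unchanged
assert abs(Z[2, 1]) < 1e-12 and Z[1, 1] == U[1, 1]
```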
An intrinsic (i.e., invariant under permutations) description of the transfer matrix $S^{-1}$ from the nodal unknowns to the incremental unknowns is readily done with a picture of the associated directed graph of the transfer matrix from the (previous) nodal unknowns to the $j$th-level incremental unknowns, such as illustrated in Fig. 2 (the left bottom corner of the grid $\Omega_j$). Further, with the indication that the axial coefficients are $\frac12$ and the oblique coefficients are $\frac14$, we have a complete definition of an $n\times n$ matrix $D_j$ such that

$$S^{-1} = I - \sum_{j=1}^{\ell} D_j. \tag{7}$$
The directed graph associated to the $n\times n$ matrix $D_j^{\mathrm T}$ is displayed in Fig. 3. Now it is immediate to see graphwise that $D_k^{\mathrm T} D_l = 0$ for $k \ne l$, from where

$$(SS^{\mathrm T})^{-1} = S^{-\mathrm T}S^{-1} = I - \sum_{j=1}^{\ell} D_j^{\mathrm T} - \sum_{j=1}^{\ell} D_j + \sum_{j=1}^{\ell} D_j^{\mathrm T} D_j. \tag{8}$$

Besides, we observe that $D_j^{\mathrm T} D_j = \frac54 I_{j-1} + F_{j-1}$, with an $n\times n$ matrix $F_{j-1}$ easily defined graphwise; its associated directed graph is readily checked to be as described in Fig. 4; further, with
the indication that the axial coefficients are $\frac38$ and the oblique coefficients are $\frac1{16}$, we have a complete definition of the matrix $F_{j-1}$. We point out that

$$F_j = \frac14 h_j^2\left(\frac32 \Delta_{h_j} + \frac14 H_{h_j}\right) + \frac74 I_j. \tag{9}$$
On the other hand, it is also easy to see graphwise that
Fig. 2. Directed graph of the matrix $D_j$.
Fig. 1. The nested sequence of grids. The coarse grid: squares; the intermediate grid: squares and circles; and the finest grid: squares, circles, and crosses; for $N = 6$, $i = 3$.
$$I_j - D_j^{\mathrm T} - D_j = h_j^2\left(-\frac12 \Delta_{h_j} - \frac14 H_{h_j}\right) - \left(2I_j - G_j\right), \tag{10}$$

with an $n\times n$ matrix $G_j$ also easily defined graphwise; its associated directed graph is displayed in Fig. 5; further, with the indication that the axial coefficients are $\frac12$ and the oblique coefficients are $\frac14$, we have a complete definition of the matrix $G_j$.
Fig. 3. Directed graph of the matrix $D_j^{\mathrm T}$.
Fig. 4. Directed graph of the matrix $F_{j-1}$.
Fig. 5. Directed graph of the matrix $G_j$.
Now, adding graphwise, we obtain

$$-D_j^{\mathrm T} - D_j + D_{j+1}^{\mathrm T} D_{j+1} = \left(I_j - D_j^{\mathrm T} - D_j\right) + \left(-I_j + D_{j+1}^{\mathrm T} D_{j+1}\right) = \frac14 h_j^2\left(-\frac12 \Delta_{h_j} - \frac34 H_{h_j}\right) + G_j,$$

and, observing that

$$(SS^{\mathrm T})^{-1} = D_1^{\mathrm T} D_1 + \sum_{j=1}^{\ell-1}\left(-D_j^{\mathrm T} - D_j + D_{j+1}^{\mathrm T} D_{j+1}\right) + \left(I - D_\ell^{\mathrm T} - D_\ell\right),$$

we can write

$$(SS^{\mathrm T})^{-1} = \mathcal{A}_0 + \frac18\sum_{j=1}^{\ell-1}\mathcal{A}_j + \frac12\mathcal{A}_\ell - G, \tag{11}$$

where

$$\mathcal{A}_0 = D_1^{\mathrm T} D_1,$$
$$\mathcal{A}_j = h_j^2\left(-\Delta_{h_j} - \frac32 H_{h_j}\right), \quad \text{for } j = 1,\ldots,\ell-1,$$
$$\mathcal{A}_\ell = h_\ell^2\left(-\Delta_{h_\ell} - \frac12 H_{h_\ell}\right),$$
$$G = 2I - \sum_{j=1}^{\ell} G_j = 2I_0 + \sum_{j=1}^{\ell}\left\{2\left(I_j - I_{j-1}\right) - G_j\right\}.$$

It follows promptly from the Geršgorin theorem that the matrix $2(I_j - I_{j-1}) - G_j$ is positive semidefinite, so that the matrix $G$ is positive semidefinite; moreover, by computing its eigenvalues, we observe that the matrix $2I_0 - \mathcal{A}_0$ is not positive semidefinite, so that the matrix $\mathcal{A}_0$ cannot go into the matrix $G$ while keeping the matrix $G$ positive semidefinite.
Furthermore, we introduce the notation

$$B_j = h_j^2\left(-\Delta_{h_j}\right) \quad \text{for } j = 0,\ldots,\ell.$$
3. Condition number of the incremental unknowns matrix
3.1. Lower bound
The symmetric matrix $-H_{h_j}$ is positive definite because it is irreducibly diagonally dominant and has positive diagonal entries. On the other hand, the matrix $2(-\Delta_{h_j}) + H_{h_j}$ is positive definite, so that

$$(v, -H_{h_j}v) \le 2(v, -\Delta_{h_j}v);$$

then we can state the following result.

Lemma 3.1. For any $v \in V_{h_j} \otimes V_{h_j}$, $v \ne 0$, and for any $\mu \in \mathbb{R}$, $\mu \ge 0$, the operators $\Delta_{h_j}$ and $H_{h_j}$ satisfy

$$1 \le \frac{\left(v, (-\Delta_{h_j} - \mu H_{h_j})v\right)}{(v, -\Delta_{h_j}v)} \le 1 + 2\mu. \tag{12}$$
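Lemma 3.1 can be checked numerically on a small grid: the generalized eigenvalues of $(-\Delta_{h_j} - \mu H_{h_j})v = \lambda(-\Delta_{h_j})v$ should lie in $[1, 1+2\mu]$. A Python sketch (helper names are ours):

```python
import numpy as np

def stencil_matrices(n):
    """h^2-scaled operators h^2(-Delta_h) and h^2(-H_h) on the
    (n-1) x (n-1) interior grid, via Kronecker products."""
    N = n - 1
    I = np.eye(N)
    T = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    K = 2 * I - T
    return np.kron(K, I) + np.kron(I, K), 4 * np.kron(I, I) - np.kron(T, T)

def rayleigh_bounds(n, mu):
    """Extreme generalized eigenvalues of (-Delta - mu*H) v = lam (-Delta) v;
    the h^2 factors cancel in the quotient."""
    B, Hm = stencil_matrices(n)          # B = h^2(-Delta), Hm = h^2(-H)
    lam = np.real(np.linalg.eigvals(np.linalg.solve(B, B + mu * Hm)))
    return lam.min(), lam.max()

for mu in (0.25, 0.5, 1.5):
    lo, hi = rayleigh_bounds(8, mu)
    assert lo >= 1 - 1e-10 and hi <= 1 + 2 * mu + 1e-10
```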
Hereafter we use the following inequality, stated in [6]:

$$(v, B_j v) \le c(\ell - j)(v, B_\ell v), \tag{13}$$

where $c$ is an absolute constant independent of $i$ (the number of levels).
Now we observe that

$$\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le \frac{(v,\mathcal{A}_0 v)}{(v,B_\ell v)} + \frac18\sum_{j=1}^{\ell-1}\frac{(v,\mathcal{A}_j v)}{(v,B_\ell v)} + \frac12\,\frac{(v,\mathcal{A}_\ell v)}{(v,B_\ell v)},$$

so that

$$\max_{v\ne0}\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le \max_{v\ne0}\frac{(v,\mathcal{A}_0 v)}{(v,B_0 v)}\,\max_{v\ne0}\frac{(v,B_0 v)}{(v,B_\ell v)} + \frac18\sum_{j=1}^{\ell-1}\left(\max_{v\ne0}\frac{(v,\mathcal{A}_j v)}{(v,B_j v)}\,\max_{v\ne0}\frac{(v,B_j v)}{(v,B_\ell v)}\right) + \frac12\max_{v\ne0}\frac{(v,\mathcal{A}_\ell v)}{(v,B_\ell v)},$$

that is,

$$\max_{v\ne0}\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le \max_{v\ne0}\frac{(v,\mathcal{A}_0 v)}{\left(v, h_0^2(-\Delta_{h_0})v\right)}\,c\ell + \frac12 c\sum_{j=1}^{\ell-1}(\ell-j) + 1.$$

From where we obtain

$$\max_{v\ne0}\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le Ci^2 = O(i^2), \tag{14}$$

where

$$C = \frac{c}{2}\max\left\{\max_{v\ne0}\frac{(v,\mathcal{A}_0 v)}{\left(v, h_0^2(-\Delta_{h_0})v\right)},\ \frac12\right\} + 1.$$

Now, since the eigenvalues of $\mathcal{A}_0 = D_1^{\mathrm T} D_1$ are

$$\cos^2(k\pi h_0/2) + \cos^2(l\pi h_0/2) + \cos^2(k\pi h_0/2)\cos^2(l\pi h_0/2), \quad k,l = 1,\ldots,N-1, \tag{15}$$

and since the eigenvalues of $h_0^2(-\Delta_{h_0})$ are

$$4\left(\sin^2(k\pi h_0/2) + \sin^2(l\pi h_0/2)\right), \quad k,l = 1,\ldots,N-1,$$

with the same set of orthonormal eigenvectors, as the size of the coarsest grid increases such a bound becomes worse [7]:

$$\max_{v\ne0}\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le O\!\left(\frac{1}{h_0^2}\right)O(i^2); \tag{16}$$

on the other hand, if the coarsest grid is reduced to one point (or is small enough), then such a bound becomes

$$\max_{v\ne0}\frac{\left(v,(SS^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le Ci^2 = O(i^2), \tag{17}$$

where

$$C = \frac14 c + 1.$$
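The deterioration in (16) can be observed directly from the explicit eigenvalues: the following Python sketch (the helper name `worst_ratio` is ours) evaluates the ratio of the eigenvalues (15) of $\mathcal{A}_0$ to those of $h_0^2(-\Delta_{h_0})$ and shows that its maximum roughly quadruples each time the coarsest grid is refined, i.e. grows like $1/h_0^2$:

```python
import numpy as np

def worst_ratio(N):
    """max over k, l of the eigenvalue ratio (v, A0 v)/(v, h0^2(-Delta_h0) v),
    using the explicit eigenvalues (15) on an (N-1) x (N-1) coarsest grid."""
    h0 = 1.0 / N
    k = np.arange(1, N)
    c2 = np.cos(k * np.pi * h0 / 2) ** 2
    s2 = np.sin(k * np.pi * h0 / 2) ** 2
    num = c2[:, None] + c2[None, :] + c2[:, None] * c2[None, :]  # eigenvalues (15)
    den = 4 * (s2[:, None] + s2[None, :])                        # eigenvalues of B0
    return (num / den).max()

# the bound deteriorates like 1/h0^2 as the coarsest grid is refined
r8, r16, r32 = worst_ratio(8), worst_ratio(16), worst_ratio(32)
assert r8 < r16 < r32
assert 3.0 < r16 / r8 < 5.0 and 3.0 < r32 / r16 < 5.0  # roughly quadruples
```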
3.2. Upper bound
The basis of $V_h \otimes V_h$ that allows to recover the incremental unknowns introduced before is built up from the one-dimensional hierarchical basis by tensor products and translations [4]; it is called the hierarchical basis [13]; these basis functions are finite-difference hat functions. Now, since the scalar product on $V_h \otimes V_h$ splits into one-dimensional scalar products over tensor-product functions, one-dimensional computations provide the coefficients of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$ (cf. [8] for a straightforward computation of the block-matrix structure of this matrix). Indeed, let $\omega^{(k)}$ be a $k$th-level hat function remaining fixed; its support is a square, with the center of the square being the center (projection) of the hat. Now we observe that the $j$th-level hat functions $\omega^{(j)}$ with $j \ge k$ such that $\left((\omega^{(k)},\omega^{(j)})\right)_h \ne 0$ have support a square with its center either on the border or on the axes of the square support of $\omega^{(k)}$, as displayed in Fig. 6; there are at most $O(2^{j-k})$ such $j$th-level functions; then we can go to one-dimensional computations. Let $\varphi_x^{(k)}$ be a $k$th-level hat function with its center (projection) the $k$th-level node $x$; we observe that
$$\left(\left(\varphi_x^{(k)},\varphi_x^{(j)}\right)\right)_h = \frac{1}{h}\left(\frac{2}{2^{\ell-k}}\right) \quad \text{for } j \ge k,$$
$$\left(\left(\varphi_x^{(k)},\varphi_y^{(j)}\right)\right)_h = \frac{1}{h}\left(-\frac{1}{2^{\ell-k}}\right) \quad \text{for } j \ge k,$$

where $y$ is a $k$th-level neighbor of $x$;

$$\left(\varphi_x^{(k)},\varphi_z^{(j)}\right)_h = h\left(2^{\ell-j}C\right) \quad \text{for } j > k,$$

where $z$ is a $j$th-level node inside the support of $\varphi_x^{(k)}$ and where

$$C \in \left\{\frac{1}{2^{j-k}}, \frac{3}{2^{j-k}}, \ldots, \frac{2^{j-k}-1}{2^{j-k}}\right\}.$$

Last, we see that

$$\left(\varphi_x^{(k)},\varphi_x^{(k)}\right)_h = h\left(1 + \frac{(2^{\ell-k}-1)(2^{\ell-k+1}-1)}{3\cdot 2^{\ell-k}}\right),$$
$$\left(\varphi_x^{(k)},\varphi_y^{(k)}\right)_h = h\left(\frac{(2^{\ell-k}-1)(2^{\ell-k}+1)}{3\cdot 2^{\ell-k+1}}\right).$$

The computations above allow us to conclude that

$$\left(\left(\omega^{(k)},\omega^{(j)}\right)\right)_h \le c\,\frac{1}{2^{j-k}} \quad \text{for } j \ge k, \tag{18}$$
Fig. 6. Support of $\omega^{(k)}$ (light shading) and centre of $\omega^{(j)}$ (dark line).
where $c$ is an absolute constant and where $\omega^{(k)}, \omega^{(j)}$ are as shown before. It follows promptly from linear algebra lemmas established on p. 197 of Ref. [6] and from the Geršgorin theorem that the maximum eigenvalue of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$ is bounded by an absolute constant independent of $i$ (the number of levels).
4. Block diagonal (scaling) preconditioning
In view of the results above, when the coarsest grid is not reduced to one point (nor is small enough), we will use left orientation [10] block diagonal (scaling) preconditioning to solve incremental unknowns linear systems; the preconditioning matrix for the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$ will be as follows.
4.1. Preconditioner I
$$K = \begin{bmatrix} I & \\ & L \end{bmatrix}, \tag{19}$$
where $L$ is the coarse-level block diagonal part of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$; when lexicographical ordering of the unknowns is used, we have [5,8]

$$L = [\,y_\ell\ x_\ell\ y_\ell\,]_{N-1} \otimes \frac{1}{2^\ell}[\,{-1}\ \ 2\ \ {-1}\,]_{N-1} + \frac{1}{2^\ell}[\,{-1}\ \ 2\ \ {-1}\,]_{N-1} \otimes [\,y_\ell\ x_\ell\ y_\ell\,]_{N-1},$$

where

$$x_\ell = \frac13\left(2^{\ell+1} + \frac{1}{2^\ell}\right), \qquad y_\ell = \frac13\left(2^{\ell-1} - \frac{1}{2^{\ell+1}}\right),$$
and where we write

$$[\,a\ b\ c\,]_M = \begin{bmatrix} b & c & & & \\ a & b & c & & \\ & a & b & c & \\ & & \ddots & \ddots & \ddots \\ & & & a & b \end{bmatrix}_{M\times M};$$
then, from a straightforward computation, we get

$$L = \frac13\,\frac{1}{4^\ell}[\,{-1}\ \ 2\ \ {-1}\,]_{N-1} \otimes [\,{-1}\ \ 2\ \ {-1}\,]_{N-1} + \frac13 h_0^2\left(-\Delta_{h_0} - H_{h_0}\right).$$
The associated directed graph $G(K)$ is strongly connected at the coarsest level, and at the fine levels each node communicates only with itself; further, with the indication that the coefficients at the coarsest level are the coefficients of the coarsest-level matrix $L$ and that the circular coefficients at the fine levels are 1, we have a complete definition of an $n\times n$ matrix $M$ such that

$$K = \left(M - I_0\right) + I.$$
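The two expressions for $L$ above can be cross-checked numerically; the Python sketch below (the helper name `tri` is ours) builds $L$ both from the tensor-product form with the coefficients $x_\ell, y_\ell$ and from the form obtained by the straightforward computation, and verifies that they coincide:

```python
import numpy as np

def tri(a, b, c, M):
    """M x M tridiagonal Toeplitz matrix [a b c]_M."""
    return (np.diag(np.full(M, b)) + np.diag(np.full(M - 1, a), -1)
            + np.diag(np.full(M - 1, c), 1))

N, ell = 5, 3
M = N - 1
x = (2 ** (ell + 1) + 2.0 ** (-ell)) / 3          # x_ell
y = (2 ** (ell - 1) - 2.0 ** (-(ell + 1))) / 3    # y_ell
A = tri(y, x, y, M)
B = tri(-1.0, 2.0, -1.0, M) / 2 ** ell
L1 = np.kron(A, B) + np.kron(B, A)                # tensor-product form of L

I = np.eye(M)
T = tri(1.0, 0.0, 1.0, M)
K = tri(-1.0, 2.0, -1.0, M)
lap = np.kron(K, I) + np.kron(I, K)               # h0^2 (-Delta_h0)
Hm = 4 * np.kron(I, I) - np.kron(T, T)            # h0^2 (-H_h0)
L2 = np.kron(K, K) / (3 * 4 ** ell) + (lap + Hm) / 3

assert np.allclose(L1, L2)                        # the two expressions agree
```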
Now, multiplying graphwise, we get first

$$KS^{-1} = \left(M - I_0\right) + I - \sum_{j=1}^{\ell} D_j,$$

and then

$$S^{-\mathrm T}KS^{-1} = \left(M - I_0\right) + I - \sum_{j=1}^{\ell} D_j - \sum_{j=1}^{\ell} D_j^{\mathrm T} + \sum_{j=1}^{\ell} D_j^{\mathrm T} D_j.$$

So we can write again

$$(SK^{-1}S^{\mathrm T})^{-1} = S^{-\mathrm T}KS^{-1} = \mathcal{A}_0 + \frac18\sum_{j=1}^{\ell-1}\mathcal{A}_j + \frac12\mathcal{A}_\ell - G, \tag{20}$$

with the only difference that here we have

$$\mathcal{A}_0 = M - I_0 + D_1^{\mathrm T} D_1.$$

Now, by computing its eigenvalues (cf. Eq. (15)), we observe that the matrix $3I_0 - D_1^{\mathrm T} D_1$ is positive semidefinite, so that the matrix $-I_0 + D_1^{\mathrm T} D_1$ can go into the matrix $G$ while keeping the matrix $G$ positive semidefinite; we redefine the matrix $G$ accordingly. We get then

$$\max_{v\ne0}\frac{\left(v,(SK^{-1}S^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le \max_{v\ne0}\frac{(v,Mv)}{\left(v,h_0^2(-\Delta_{h_0})v\right)}\,c\ell + \frac12 c\sum_{j=1}^{\ell-1}(\ell-j) + 1.$$
Here we set

$$N = \frac13\,\frac{1}{4^\ell}[\,{-1}\ \ 2\ \ {-1}\,]_{N-1} \otimes [\,{-1}\ \ 2\ \ {-1}\,]_{N-1},$$

and we observe that

$$\max_{v\ne0}\frac{(v,Nv)}{\left(v,h_0^2(-\Delta_{h_0})v\right)} \le \frac23\,\frac{1}{4^\ell}\max_{k,l=1,\ldots,N-1}\frac{2\sin^2(k\pi h_0/2)\sin^2(l\pi h_0/2)}{\sin^2(k\pi h_0/2) + \sin^2(l\pi h_0/2)} \le \frac23\,\frac{1}{4^\ell}.$$

Using the relation (12), we have

$$\max_{v\ne0}\frac{(v,Mv)}{\left(v,h_0^2(-\Delta_{h_0})v\right)} \le \frac23\,\frac{1}{4^\ell} + 1,$$

from where we obtain

$$\max_{v\ne0}\frac{\left(v,(SK^{-1}S^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le Ci^2, \tag{21}$$

where

$$C = \frac{7}{12}c + 1.$$
4.2. Preconditioner II
$$K = \begin{bmatrix} L_\ell & & & \\ & \ddots & & \\ & & L_1 & \\ & & & L \end{bmatrix}, \tag{22}$$
where $L$ is the coarse-level block diagonal part, and where $L_j$, $j = 1,\ldots,\ell$, is the $j$th-level (fine-level) diagonal part, of the incremental unknowns matrix $S^{\mathrm T}(-\Delta_h)S$; we have [5,8]

$$L_j = k_j\left(I_j - I_{j-1}\right), \quad j = 1,\ldots,\ell,$$

where

$$k_j = \frac43\left(2 + \frac{1}{(2^{\ell-j})^2}\right), \quad j = 1,\ldots,\ell.$$
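The coefficient $k_j$ is the diagonal entry of $S^{\mathrm T}h^2(-\Delta_h)S$ at a $j$th-level unknown, and by the tensor-product splitting it reduces to one-dimensional hat-function computations; the following Python sketch (helper names are ours) reproduces $k_j = \frac43(2 + 1/4^{\ell-j})$ from the discrete 1D energy and mass of a hat of half-width $M = 2^{\ell-j}$ fine intervals:

```python
import numpy as np

def hat(center, M, size):
    """Nodal values of a 1D hat function of half-width M fine intervals."""
    i = np.arange(1, size + 1)
    return np.maximum(0.0, 1.0 - np.abs(i - center) / M)

def diag_entry(M, size=129, center=64):
    """Diagonal entry of S^T h^2(-Delta_h) S at a level with h_j = M*h,
    computed from the tensor-product hat function."""
    K = (np.diag(np.full(size, 2.0)) + np.diag(np.full(size - 1, -1.0), 1)
         + np.diag(np.full(size - 1, -1.0), -1))
    phi = hat(center, M, size)
    energy = phi @ K @ phi       # 1D discrete Dirichlet energy = 2/M
    mass = phi @ phi             # 1D discrete mass = (2M^2+1)/(3M)
    return 2 * energy * mass     # tensor-product splitting in 2D

for p in (0, 1, 2, 3):
    M = 2 ** p
    assert np.isclose(diag_entry(M), (4.0 / 3.0) * (2 + 1.0 / M ** 2))
```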
The description of the associated directed graph $G(K)$ is exactly as before; furthermore, with the indication that the circular coefficients at the $j$th level are $k_j$, we have a complete definition of an $n\times n$ matrix $M$ such that

$$K = M + \sum_{j=1}^{\ell} k_j\left(I_j - I_{j-1}\right).$$
Now, multiplying graphwise, we get first

$$KS^{-1} = M + \sum_{j=1}^{\ell}\left\{k_j\left(I_j - I_{j-1}\right) - k_j D_j\right\},$$

and then

$$S^{-\mathrm T}KS^{-1} = M + \sum_{j=1}^{\ell}\left\{k_j\left(I_j - I_{j-1}\right) - k_j D_j\right\} + \sum_{j=1}^{\ell} k_j\left(-D_j^{\mathrm T} + D_j^{\mathrm T} D_j\right).$$

So we can write

$$S^{-\mathrm T}KS^{-1} = M + \sum_{j=1}^{\ell} k_j\left(I_j - D_j^{\mathrm T} - D_j\right) + \sum_{j=1}^{\ell} k_j\left(-I_{j-1} + D_j^{\mathrm T} D_j\right).$$

Using Eq. (10), we obtain

$$(SK^{-1}S^{\mathrm T})^{-1} = S^{-\mathrm T}KS^{-1} = \mathcal{A}_0 + \frac12\sum_{j=1}^{\ell} k_j\mathcal{A}_j - G, \tag{23}$$

where

$$\mathcal{A}_0 = M,$$
$$\mathcal{A}_j = h_j^2\left(-\Delta_{h_j} - \frac12 H_{h_j}\right) \quad \text{for } j = 1,\ldots,\ell,$$
$$G = \sum_{j=1}^{\ell} k_j\left\{\left(2I_j - G_j\right) + I_{j-1} - D_j^{\mathrm T} D_j\right\}.$$
Immediately we observe that

$$\left(2I_j - G_j\right) + I_{j-1} - D_j^{\mathrm T} D_j = \left\{2\left(I_j - I_{j-1}\right) - G_j\right\} + \left\{3I_{j-1} - D_j^{\mathrm T} D_j\right\}; \tag{24}$$

then, by computing its eigenvalues (cf. (15)), we observe that the matrix $3I_{j-1} - D_j^{\mathrm T} D_j$ is positive semidefinite, so that the matrix $G$ is positive semidefinite.
We get now

$$\max_{v\ne0}\frac{\left(v,(SK^{-1}S^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le \max_{v\ne0}\frac{(v,Mv)}{\left(v,h_0^2(-\Delta_{h_0})v\right)}\,c\ell + c\sum_{j=1}^{\ell-1} k_j(\ell-j) + k_\ell,$$

from where we obtain

$$\max_{v\ne0}\frac{\left(v,(SK^{-1}S^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le Ci^2, \tag{25}$$

where

$$C = \frac32 c + 1.$$
4.3. Preconditioner III
$$K = \begin{bmatrix} 4I & & & \\ & \ddots & & \\ & & 4I & \\ & & & L \end{bmatrix}, \tag{26}$$
where first $L = h_0^2\left(-\Delta_{h_0}\right)$; we call it Laplace operator block diagonal scaling. Then $L = [\,{-1}\ \ 2\ \ {-1}\,]_{N-1} \otimes [\,{-1}\ \ 2\ \ {-1}\,]_{N-1}$; we call it tensor operator block diagonal scaling. From the analysis above, we obtain

$$\max_{v\ne0}\frac{\left(v,(SK^{-1}S^{\mathrm T})^{-1}v\right)}{(v,B_\ell v)} \le Ci^2, \tag{27}$$

where

$$C = 2c + 1.$$
Last, since $\|K^{-1}S^{\mathrm T}(-\Delta_h)S\|_2 \le \|K^{-1}\|_2\,\|S^{\mathrm T}(-\Delta_h)S\|_2$, the maximum eigenvalue of the incremental unknowns matrix $K^{-1}S^{\mathrm T}(-\Delta_h)S$ is bounded by $\|K^{-1}\|_2$ times an absolute constant independent of $i$ (the number of levels); the block diagonal scaling preconditioners deteriorate the upper bound. The processes above allow us to state now the following result.

Theorem 4.1. The condition number of the incremental unknowns matrix associated to the Laplace operator is $O(1/h_0^2)\,O((\log h)^2)$, where $h_0$ is the mesh size of the coarsest grid and $h$ is the mesh size of the finest grid; furthermore, if block diagonal scaling is used, then the condition number of the preconditioned incremental unknowns matrix associated to the Laplace operator comes out to be $O((\log h)^2)$.
5. Computational experiments and remarks
First we point out that the algebraic conditioning analysis of the incremental unknowns preconditioner presented herein provides theoretical justification of numerical results reported before [9]. Next we consider the two-dimensional Poisson equation with Dirichlet boundary conditions

$$-\Delta u = 1 \quad \text{in } \Omega = (0,1)\times(0,1), \qquad u = 0 \quad \text{on } \Gamma = \partial\Omega,$$

and put into action the methodology above. We display, in three cases, the convergence behavior of the preconditioned conjugate gradient method [12] when block diagonal scaling is used:
1. Block diagonal scaling: Preconditioner II (block-diagonal).
2. Laplace operator block diagonal scaling (laplace-diagonal).
3. Tensor operator block diagonal scaling (tensor-diagonal).
Here we plot the $\ell^2$-norm of the residuals ($\log_{10}$) of the incremental unknowns linear system $S^{\mathrm T}(-\Delta_h)S\hat x = S^{\mathrm T}b$, against the number of iterations; in Fig. 7 we consider two levels and a $31\times31$ coarsest grid, and in Fig. 8 we consider three levels and again a $31\times31$ coarsest grid.
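To make the experiment concrete on a small scale, here is a self-contained Python sketch (our own two-level construction with zero Dirichlet padding, not the paper's code; the names `poisson`, `transfer`, and `cg` are ours) that builds the transfer matrix $S$, runs plain conjugate gradient on both $Ax = b$ and $S^{\mathrm T}AS\hat x = S^{\mathrm T}b$, and checks that the two computations recover the same nodal solution:

```python
import numpy as np

def poisson(m):
    """h^2-scaled 5-point matrix on an m x m interior grid (mesh 1/(m+1))."""
    I = np.eye(m)
    K = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    return np.kron(K, I) + np.kron(I, K)

def transfer(m):
    """Two-level transfer matrix S (incremental -> nodal unknowns) on an
    m x m interior grid; coarse nodes have both indices even, and coarse
    neighbors falling on the zero Dirichlet boundary are skipped."""
    S = np.eye(m * m)
    idx = lambda i, j: (i - 1) * m + (j - 1)      # 1-based grid indices
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if i % 2 == 0 and j % 2 == 0:
                continue                          # coarse node: x = z
            if i % 2 == 1 and j % 2 == 0:
                nbrs, w = [(i - 1, j), (i + 1, j)], 0.5
            elif i % 2 == 0 and j % 2 == 1:
                nbrs, w = [(i, j - 1), (i, j + 1)], 0.5
            else:
                nbrs, w = [(i - 1, j - 1), (i - 1, j + 1),
                           (i + 1, j - 1), (i + 1, j + 1)], 0.25
            for p, q in nbrs:
                if 1 <= p <= m and 1 <= q <= m:
                    S[idx(i, j), idx(p, q)] += w
    return S

def cg(A, b, tol=1e-8):
    """Plain conjugate gradient; returns (iteration count, solution)."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rr = r @ r
    for it in range(1, 10 * len(b)):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            return it, x
        p = r + (rr_new / rr) * p
        rr = rr_new
    return it, x

m = 15                                   # 15 x 15 interior nodes, two levels
A = poisson(m)
b = np.full(m * m, 1.0 / (m + 1) ** 2)   # h^2 * f with f = 1
S = transfer(m)
it_nodal, x = cg(A, b)
it_iu, z = cg(S.T @ A @ S, S.T @ b)      # CG on the incremental unknowns system
assert np.allclose(S @ z, x, atol=1e-5)  # both recover the same nodal solution
```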
Fig. 7. Behavior of the preconditioned conjugate gradient method: $63\times63$ grid points, two levels.
The block diagonal scaling: Preconditioner II (block-diagonal) is the best; the Laplace operator block diagonal scaling (laplace-diagonal) appears as an acceptable alternative, whereas the tensor operator block diagonal scaling (tensor-diagonal) is not suitable; the convergence behavior deteriorates from case to case because of the effect of the constant $\|K^{-1}\|_2$. Last, in Fig. 9 we display the variation of the interpolation coefficients when higher-order interpolation is used to define higher-order incremental unknowns [4]; the coefficient $c_0$ is near $\frac12$, whereas the other coefficients are small quantities; computational experiments [4] show that the convergence behavior of the iterative methods is similar whatever the order of the interpolation is. The numerical experiments were performed in double precision arithmetic on the Kubota Pacific Company Titan; BLAS and LAPACK from Netlib were used for the numerical codes, and Maple V Release 4 was used for the pictures.
The incremental unknowns method has been applied to (particular) boundary value problems with variable coefficients arising from the implicit finite-difference discretization of the Navier-Stokes equations [5].
Finally, we observe that in the analysis before, the coarse grid need not be uniform; only the refinements need be dyadic. The three-dimensional case is under investigation and will be addressed in a separate work.
Fig. 8. Behavior of the preconditioned conjugate gradient method: $127\times127$ grid points, three levels.
Acknowledgements
The author wishes to express his thanks to Professor Roger Temam and to the Laboratoire d'Analyse Numérique et EDP, Université de Paris-Sud (XI), Paris, France, for invaluable and constant support, and to an anonymous referee for carefully suggesting how to rewrite this analysis using graph techniques, which improved meaningfully the presentation of this paper. This work was supported by the Fondo Nacional de Desarrollo Científico y Tecnológico, Chile, through Proyecto Fondecyt 1940965.
References
[1] R. Temam, Inertial manifolds and multigrid methods, SIAM J. Math. Anal. 21 (1) (1990) 154–178.
[2] J. Céa, Approximation variationnelle des problèmes aux limites, Ann. Inst. Fourier (Grenoble) 14 (2) (1964) 345–444.
[3] R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, Number 2 in Studies in Mathematics and its Applications, 3rd ed., North-Holland, Amsterdam, 1984.
[4] S. Garcia, Higher-order incremental unknowns, hierarchical basis, and nonlinear dissipative evolutionary equations, Appl. Numer. Math. 19 (4) (1996) 467–494.
Fig. 9. Behavior of the higher-order interpolation coefficients.
[5] S. Garcia, Higher-order incremental unknowns techniques for the numerical solution of the incompressible Navier-Stokes equations, Technical Report 9516, The Institute for Scientific Computing and Applied Mathematics, Indiana University, 1995; revised (completed) version submitted to International Journal for Numerical Methods in Fluids.
[6] H.C. Elman, X. Zhang, Algebraic analysis of the hierarchical basis preconditioner, SIAM J. Matrix Anal. Appl. 16 (1) (1995) 192–206.
[7] W. Hackbusch, Elliptic Differential Equations: Theory and Numerical Treatment, Number 18 in Springer Series in Computational Mathematics, Springer, Berlin, 1992 (translated from the German).
[8] S. Garcia, The matricial framework for the incremental unknowns method, Numer. Funct. Anal. Optim. 14 (1-2) (1993) 25–44.
[9] S. Garcia, Numerical study of the incremental unknowns method, Numer. Meth. PDEs 10 (1) (1994) 103–127.
[10] G. Brussino, V. Sonnad, A comparison of direct and preconditioned iterative techniques for sparse, unsymmetric systems of linear equations, Internat. J. Numer. Meth. Eng. 28 (4) (1989) 801–815.
[11] M. Chen, R. Temam, Incremental unknowns for solving partial differential equations, Numer. Math. 59 (3) (1991) 255–271.
[12] P. Concus, G.H. Golub, D.P. O'Leary, A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations, in: J.R. Bunch, D.J. Rose (Eds.), Sparse Matrix Computations, Academic Press, New York, 1976, pp. 309–332.
[13] H. Yserentant, On the multi-level splitting of finite element spaces, Numer. Math. 49 (4) (1986) 379–412.