JOHANNES KEPLER UNIVERSITÄT LINZ
Netzwerk für Forschung, Lehre und Praxis (Network for Research, Teaching and Practice)

Hermitian and skew-Hermitian Solvers and Preconditioners: Application to Symmetric and Indefinite Problems

Master's thesis (Diplomarbeit) for the academic degree of Diplomingenieurin in the study program Technische Mathematik

Written at the Institut für Numerische Mathematik

Supervisor: O. Univ. Prof. Dipl. Ing. Dr. Ulrich Langer

Submitted by: Lilya Ghazaryan

Linz, August 2008

Johannes Kepler Universität, A-4040 Linz · Altenbergerstraße 69 · Internet: http://www.jku.at · DVR 0093696



For my parents, Levon and Veta


Abstract

The aim of this master's thesis is to apply the Hermitian and skew-Hermitian splitting (HSS) iterative method and its inexact version to the solution of linear algebraic systems with symmetric but indefinite system matrices, which arise in different applications. In particular, we consider saddle-point problems coming from a reformulation of the well-known Finite Element Tearing and Interconnecting domain decomposition method as a saddle-point problem with both primal and dual variables as unknowns. This is an alternative, hopefully better, approach to the existing methods, namely block-structured preconditioners combined with suitable Krylov subspace methods, or Schur-complement conjugate gradient methods. The convergence of the HSS method is studied numerically. The numerical experiments show that using the HSS method as a preconditioner for a Krylov subspace method is very efficient.


Acknowledgement

During my two-year studies at the Technical University of Eindhoven (TU/e) and Johannes Kepler Universität Linz (JKU), many people influenced, directly or indirectly, the writing of my master's thesis. I consider writing this thesis one of the most important steps in my life. First of all, it marks the completion of another stage in my education. Secondly, it is the culmination of my two-year journey in Europe.

I would like to thank, first of all, TU/e and JKU for hosting me for two years and giving me the chance to be a part of their academic life. I am very grateful to my supervisor, Prof. Ulrich Langer, for inspiring and encouraging me while I worked on the thesis. In general, I want to thank all the professors at both universities who shared their knowledge and experience with me and my colleagues. These two years of study would not have been as fruitful without the discussions during the lectures, seminars and group work. I am very happy that I can see the change in my knowledge, and that the hopes I had before starting the master's program have been fulfilled. I found out for myself that the more I study applied mathematics, the more I like it!

I would not have been able to reach this stage of my life without the help and support of my family. I am deeply grateful to my parents, Levon and Veta, and my sisters, Sara and Tamara, for their belief in me. I am thankful to all the friends I met in Europe for making me feel at home and for sharing the difficulties and happiness with me. These two years would not have been as interesting without them.

Writing this thesis is another experience in my education, hopefully not the last one!


Contents

1 Introduction

2 Motivation
2.1 Some concepts
2.2 Model problem
2.3 Finite Element discretization

3 Hermitian and skew-Hermitian splitting methods
3.1 HSS iteration method
3.2 HSS as a preconditioner for Krylov subspace methods
3.3 IHSS iteration method
3.4 HSS for generalized saddle-point problems
3.5 AHSS splitting iteration for saddle-point problems

4 Finite Element Tearing and Interconnecting Method
4.1 About Domain Decomposition Methods
4.2 Original FETI method

5 HSS applied to FETI system
5.1 Numerical results

6 Other saddle-point problems
6.1 Mixed Formulations of 2nd order elliptic problems
6.1.1 Linear elliptic problems
6.1.2 Linear elasticity problem
6.1.3 Stokes problem
6.2 Boundary Element Method
6.3 Analysis and numerics for Mixed Variational Problems
6.3.1 Brezzi's theorem
6.3.2 Mixed Finite Element Approximation

7 Conclusion

Bibliography

List of Figures

List of Tables


Chapter 1

Introduction

The main goal of this work is to study the application of the Hermitian and skew-Hermitian splitting (HSS) iteration [2] to the saddle-point system coming from the Finite Element Tearing and Interconnecting (FETI) domain decomposition (DD) method [10]. The choice of this particular domain decomposition method as an application is motivated by the fact that FETI, the more recent Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP), and the Balancing Domain Decomposition by Constraints (BDDC) methods are among the most widely used domain decomposition methods [12].

A large number of phenomena in nature are mathematically described by Partial Differential Equations (PDEs) [1]. Elliptic PDEs play the main role, because time integration methods for parabolic and hyperbolic PDEs ultimately lead to the solution of a sequence of elliptic PDEs. The theory of elliptic PDEs [3] is a well-developed area of applied mathematics, and there are fundamental methods which are widely used for solving such problems. The machinery for solving linear elliptic PDEs can be roughly described as follows:

• establishing the well-posedness of the problem, namely the existence, uniqueness and stability of the weak solution,

• discretization of the problem: the transition from an infinite-dimensional space to a finite-dimensional one,

• solving the discrete system, usually formulated as a linear system of equations.


The first step of this description is more or less fixed, as the well-posedness of the problem relies on the Lax-Milgram theorem [1] or, more generally, on Fredholm theory [33]. The other two steps can be done in several ways. For the discretization of the problem one can use different techniques, such as the Finite Element Method (FEM) [9], the Finite Difference Method (FDM) [4], the Finite Volume Method (FVM) and the Boundary Element Method (BEM) [5]. And, of course, nowadays there exists a large number of efficient ways of solving linear systems of equations [6], [7]. The choice of the method, both for discretizing and for solving the problem, strongly depends on the nature of the problem: to find the most efficient way of solving a particular problem, one has to take its properties into account. For example, if the resulting linear system is symmetric and positive definite, then a good choice is the Conjugate Gradient (CG) method [6]. In this regard, certain methods can be quite efficient when applied to certain classes of problems.
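As a concrete illustration of this last point, a symmetric positive definite system can be handled with a few lines of code. The sketch below (an example we add for illustration; the matrix and right-hand side are not taken from the thesis) applies SciPy's CG solver to a one-dimensional Laplacian, a typical matrix produced by FEM or FDM discretizations:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# SPD model matrix: the 1D Laplacian stencil [-1, 2, -1] (tridiagonal).
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# CG only needs matrix-vector products with A; info == 0 signals convergence.
x, info = cg(A, b)
```

CG is applicable here precisely because the matrix is symmetric positive definite; for the symmetric but indefinite saddle-point systems considered later in the thesis, other methods are required.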

The HSS method we are going to consider is a representative of the iterative methods. Iterative methods for the numerical solution of linear systems of equations (originally by Gauss in 1823, Liouville in 1837 and Jacobi in 1845) are popular in several areas of scientific computing. Their approach is quite different from that of direct solution methods (such as Gaussian elimination), which until recently were often preferred because of their predictable behavior. Though iterative methods initially had a somewhat special-purpose nature, as they were developed for particular applications, nowadays they have become superior to direct solvers for large-scale problems. The most efficient direct solvers available today may not be truly efficient for solving, for instance, linear systems coming from the discretization of partial differential equations in three-dimensional space, because of the memory and computational requirements. In such cases iterative solvers are widely used since, compared to direct methods, they are easier to implement efficiently on high-performance and, in particular, parallel computers.

The simplest iterative method (preconditioned Richardson) for solving the linear system Ax = b has the form

x_{k+1} = x_k - τ C^{-1}(A x_k - b),   k = 0, 1, 2, ...,

where x_0 is a given initial guess, τ is a suitably chosen relaxation parameter and C is a preconditioner. The classical choices of the preconditioner C are C = I (classical Richardson iteration) and C = diag(A) (Jacobi) [8]. Under certain conditions the sequence x_k converges to the solution of Ax = b [6]. One of the advantages of iterative methods is that only matrix-vector multiplications and vector operations need to be performed. Also, computer storage is required only for the nonzero entries of A and the vector x, in addition to one or two more vectors. Therefore, one can exploit the sparsity of A, if A is sparse. Sometimes fully populated matrices A are data-sparse (for example circulant or block-circulant matrices) or can be approximated by data-sparse matrices [32]. This allows one to reduce the storage required for A and the cost of one matrix-vector multiplication from n^2 to n(log n)^α for some α > 0, usually α ∈ [1, 2], where n is the size of A. In that case, iterative solvers are even more efficient, while for direct methods one needs permutations in order to avoid a considerable amount of fill-in [32].
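The preconditioned Richardson iteration above can be sketched in a few lines. The example below is illustrative (the small test matrix and the choice τ = 1 are our own, not from the thesis) and uses the Jacobi choice C = diag(A):

```python
import numpy as np

def preconditioned_richardson(A, b, C, tau, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = x_k - tau * C^{-1} (A x_k - b) until the relative
    residual drops below tol. Only products with A and solves with the
    preconditioner C are needed."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        r = A @ x - b
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x - tau * np.linalg.solve(C, r)
    return x, max_iter

# Small SPD example with the Jacobi preconditioner C = diag(A).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = preconditioned_richardson(A, b, C=np.diag(np.diag(A)), tau=1.0)
```

In practice C is never formed and inverted as a dense matrix; one only needs the action of C^{-1} on a residual, which for the Jacobi choice is a componentwise division.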

The disadvantage of iterative methods is that the rate of convergence may be slow and a proper stopping criterion needs to be found. Nevertheless, for some important classes of matrices, convergence analysis results as well as practical implementation aspects are available nowadays, for instance for Symmetric Positive Definite (SPD) problems [12], [9] and saddle-point problems [11]. Certainly, the choice of iterative method depends on the problem itself; more precisely, one has to take into account the properties of the problem which leads to the linear system. One can also combine direct and iterative solvers in a way that exploits the advantages of both. In this regard, domain decomposition methods can be seen as a good combination of the two solution approaches. The class of domain decomposition methods has gained enormous popularity during the last decade [12]. These methods follow the idea of "divide-and-conquer": they divide the original problem into a number of smaller problems. This division is done for different reasons. Sometimes it arises from breaking up a domain with complicated geometry; in other cases the division is more artificial. The subproblems are easier to solve because of their smaller size, and often parallel computation can be used, which is quite important for the efficiency of the computations.

Domain decomposition methods can be seen from two different points of view. One is that they may arise from the separation of a physical domain into regions (subdomains). In these regions the problem can be modeled by separate partial differential equations, and on the interfaces between the subdomains various conditions, such as continuity, are imposed. The other approach is to see domain decomposition methods as methods for solving large algebraic linear systems arising from the discretization of partial differential equations. In that sense, a domain decomposition method can be seen as an algebraic method, in which the large system is subdivided into smaller problems whose solutions can be used to generate a preconditioner for the large system [13].


In the following chapters we will look at the HSS iterative method itself and its use for constructing preconditioners, discuss the FETI domain decomposition method, and finally present the numerical results we have obtained. For the sake of completeness, we will repeat some of the proofs of basic results throughout the thesis [2], [10], [14], [26], [29]. In more detail, the thesis is organized as follows. We start with a short Chapter 2, where we motivate the objective of this work by considering a model problem, a simple boundary value problem, whose solution suggests the consideration of the FETI domain decomposition method and the HSS iterative method. We continue with Chapter 3, where we introduce the Hermitian and skew-Hermitian splitting iterative method itself, include the main convergence results for this method, and also discuss the recently introduced Accelerated HSS method [14]. Afterwards, in Chapter 4, the actual application of the HSS method in this thesis, namely the Finite Element Tearing and Interconnecting domain decomposition method, is considered; we describe the method and give its saddle-point formulation in dual and primal variables. Later on, in Chapter 5, after all the required methods have been introduced, we consider the application of the HSS method to the FETI system for our model problem. The HSS method is considered both as a stationary iterative method and as a preconditioner for GMRES, a widely used Krylov subspace method [6], [28]; we present the numerical results we have obtained and interpret them. Since saddle-point systems arise in many scientific and engineering applications, including computational fluid dynamics [15], [16], [24], [17], mixed finite element approximation of elliptic PDEs [18], [19], [25] and optimization [20], [23], [22], [21], in Chapter 6 we present some more examples of problems leading to saddle-point systems. The thesis is concluded with a short summary and a brief discussion of possible continuations.


Chapter 2

Motivation

In this short chapter we motivate the topic of our work by considering a model problem, a Dirichlet boundary value problem on a rectangular domain. We present the main steps for solving such a problem and point out which steps are connected to our topic.

2.1 Some concepts

Before we introduce our model problem, we recall the definitions of several basic concepts that we will need throughout the chapter. We denote by L^2(Ω) the space of scalar functions which are defined and square integrable over Ω in the Lebesgue sense, namely

    L^2(Ω) = { u : ∫_Ω u^2 dx < ∞ }.

L^2(Ω) is a Hilbert space with inner product (u, v) = ∫_Ω u v dx and the induced norm ||u||_{L^2(Ω)}^2 = (u, u). In the same manner, for vector functions v = [v_1, v_2, ..., v_d]^T we define the Hilbert space (L^2(Ω))^d as

    (L^2(Ω))^d = { v = [v_1, v_2, ..., v_d]^T : v_i ∈ L^2(Ω) for i = 1, ..., d }.

The inner product in (L^2(Ω))^d is defined as (u, v) = ∫_Ω u · v dx = Σ_{i=1}^d ∫_Ω u_i v_i dx, and the corresponding norm is ||v||_{(L^2(Ω))^d}^2 = (v, v).

We also introduce the notion of a multi-index. A multi-index α = (α_1, α_2, ..., α_d) is a d-tuple of non-negative integers, and we define |α| = Σ_{i=1}^d α_i. Using this, we define the partial derivative D^α as

    D^α = ∂^{|α|} / (∂x_1^{α_1} ∂x_2^{α_2} ... ∂x_d^{α_d}).

For fixed m ≥ 0 we define the Sobolev spaces

    H^m(Ω) = { v ∈ L^2(Ω) : D^α v ∈ L^2(Ω) for |α| ≤ m },

and the associated seminorms and norms are defined as

    |v|_{H^m(Ω)}^2 = Σ_{|α|=m} ||D^α v||_{L^2(Ω)}^2   and   ||v||_{H^m(Ω)}^2 = Σ_{|α|≤m} ||D^α v||_{L^2(Ω)}^2.

We will mainly consider the space

    H^1(Ω) = { v ∈ L^2(Ω) : ∂v/∂x_1, ..., ∂v/∂x_d ∈ L^2(Ω) }

and the subspace H^1_0(Ω) ⊂ H^1(Ω), defined as

    H^1_0(Ω) = { v ∈ H^1(Ω) : v = 0 on ∂Ω }.

We will also come across the space H(div; Ω), given by

    H(div; Ω) = { v ∈ (L^2(Ω))^d : div v ∈ L^2(Ω) }.

If we assume that ∂Ω is smooth, then we can define the trace v|_{∂Ω} of any function v ∈ H^1(Ω). The set of all traces of such functions gives rise to the Hilbert space H^{1/2}(∂Ω):

    H^{1/2}(∂Ω) = { g : g = v|_{∂Ω} for some v ∈ H^1(Ω) }.

In the same manner, for vector functions v in H(div; Ω), the set of normal traces (v · n)|_{∂Ω}, where n denotes the outward normal vector to ∂Ω, gives rise to the dual space H^{-1/2}(∂Ω):

    H^{-1/2}(∂Ω) = { q : q = (v · n)|_{∂Ω} for some v ∈ H(div; Ω) }.

Some fundamental inequalities in Sobolev spaces will also be needed in our analysis, in particular the Poincaré-Friedrichs inequality.

Lemma 2.1 (Poincaré-Friedrichs inequality). For all v ∈ H^1_0(Ω) the following inequality holds:

    ||v||_{L^2(Ω)} ≤ C_F(Ω) |v|_{H^1(Ω)},

where C_F(Ω) is a constant that depends on Ω.

Lemma 2.2 (integration by parts). For all v ∈ (H^1(Ω))^d and u ∈ H^1(Ω) the following equality holds:

    ∫_Ω (∇ · v) u dx = - ∫_Ω v · ∇u dx + ∫_{∂Ω} (v · n) u ds.


2.2 Model problem

Let us consider a representative elliptic equation, the Laplace equation, which models electrostatic interaction and many other potential problems. Our model problem is thus the following boundary value problem in two-dimensional space:

    -Δu(x, y) = f(x, y)   in Ω = (0, 2) × (0, 1),
     u(x, y) = g(x, y)    on ∂Ω,                       (2.1)

where Δu(x, y) = ∂^2u/∂x^2 (x, y) + ∂^2u/∂y^2 (x, y), and f ∈ L^2(Ω), g ∈ L^2(∂Ω) are given functions.

We denote by ∇v the gradient of v, given by ∇v = [∂v/∂x, ∂v/∂y]^T ∈ R^2. The derivative of v in the direction of the outward normal n = [n_x, n_y]^T ∈ R^2 is denoted by ∂v/∂n and defined as ∂v/∂n = ∇v · n.

Now let us multiply the first equation of (2.1) by a test function v ∈ V := H^1(Ω) and integrate over Ω. We get

    - ∫_Ω Δu v dx = ∫_Ω f v dx.

According to Green's second formula [9],

    - ∫_Ω Δu v dx = ∫_Ω ∇u · ∇v dx - ∫_{∂Ω} (∂u/∂n) v ds,

therefore, if we incorporate the essential boundary condition v = 0 on ∂Ω, we get

    - ∫_Ω Δu v dx = ∫_Ω ∇u · ∇v dx.

Hence

    ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx   for all v ∈ V_0,

where V_0 := { v ∈ V : v = 0 on ∂Ω }.

Now, if we consider V_g = { u ∈ V : u = g on ∂Ω }, then we can give the variational formulation of the boundary value problem (2.1):

For given f ∈ L^2(Ω), find u ∈ V_g such that

    ∫_Ω ∇u · ∇v dx = ∫_Ω f v dx   for all v ∈ V_0.   (2.2)


Now let us introduce the bilinear form a : V × V → R and the linear form F : V → R given by

    a(u, v) = ∫_Ω ∇u · ∇v dx   and   ⟨F, v⟩ = ∫_Ω f v dx.   (2.3)

Then the variational problem (2.2) can be rewritten in the following way:

For given f ∈ L^2(Ω), find u ∈ V_g such that

    a(u, v) = ⟨F, v⟩   for all v ∈ V_0.   (2.4)

Next we formulate the fundamental theorem of the theory of variational problems, the well-known Lax-Milgram theorem [9], which provides the existence and uniqueness of the solution of (2.4). First, let us define some concepts.

Definition 2.3. A bilinear form a(·, ·) on a normed linear space H is said to be bounded (or continuous) if there exists C < ∞ such that

    |a(u, v)| ≤ C ||u||_H ||v||_H   for all u, v ∈ H.

Definition 2.4. A bilinear form a(·, ·) on a normed linear space H is said to be coercive on V ⊂ H if there exists α > 0 such that

    |a(u, u)| ≥ α ||u||_H^2   for all u ∈ V.

Definition 2.5. A linear form ⟨F, ·⟩ on a normed linear space H is said to be bounded (or continuous) on H if there exists C < ∞ such that

    |⟨F, v⟩| ≤ C ||v||_H   for all v ∈ H.

Having these definitions, we can state the Lax-Milgram theorem.

Theorem 2.6. Let H be a Hilbert space and V a closed subspace of H. Assume that

• a(·, ·) is a bounded (continuous) bilinear form on H,

• a(·, ·) is coercive on V,

• ⟨F, ·⟩ is a bounded (continuous) functional on V.

Then the variational problem: find u ∈ V such that

    a(u, v) = ⟨F, v⟩   for all v ∈ V   (2.5)

has a unique solution.


So, in order to show that our model problem is well posed, it is sufficient to show that the bilinear form a(·, ·) and the functional ⟨F, ·⟩ satisfy the conditions of the Lax-Milgram theorem.

• Boundedness of a(·, ·): from the definition of a(·, ·), for all u, v ∈ V_0 we have

    |a(u, v)| = | ∫_Ω ∇u · ∇v dx |
              ≤ ∫_Ω |∇u| |∇v| dx                                   (triangle inequality)
              ≤ ( ∫_Ω |∇u|^2 dx )^{1/2} ( ∫_Ω |∇v|^2 dx )^{1/2}    (Cauchy-Schwarz inequality)
              = ||∇u||_{L^2} ||∇v||_{L^2}
              ≤ C_1 ||u||_{H^1} ||v||_{H^1}                        (with C_1 = 1).

• Coercivity of a(·, ·): from the definition of a(·, ·), for all u ∈ V_0 we have

    a(u, u) = ∫_Ω ∇u · ∇u dx = ||∇u||_{L^2}^2 = |u|_{H^1}^2.

By the Friedrichs inequality, ||u||_{H^1}^2 = ||u||_{L^2}^2 + |u|_{H^1}^2 ≤ (1 + C_F^2) |u|_{H^1}^2, hence

    a(u, u) ≥ (1 / (1 + C_F^2)) ||u||_{H^1}^2.

• Boundedness of ⟨F, ·⟩: from the definition of ⟨F, ·⟩, for all v ∈ V_0 we have

    |⟨F, v⟩| = | ∫_Ω f v dx |
             ≤ ∫_Ω |f v| dx                                        (triangle inequality)
             ≤ ( ∫_Ω |f|^2 dx )^{1/2} ( ∫_Ω |v|^2 dx )^{1/2}       (Cauchy-Schwarz inequality)
             = ||f||_{L^2} ||v||_{L^2}
             ≤ ||f||_{L^2} ||v||_{H^1}
             ≤ C_2 ||v||_{H^1}                                     (with C_2 = ||f||_{L^2}).


2.3 Finite Element discretization

So, now we know that our variational problem is well posed, and the next step in solving it is discretization. The process of discretization is based on a Galerkin approximation: an approximate solution is sought in a finite-dimensional subspace of the space in which the weak (variational) formulation is posed. From our continuous problem we thus obtain a discrete problem for which the conditions of the Lax-Milgram theorem are satisfied, which means that the discrete problem is well posed as well. We choose a finite-dimensional subspace V_h ⊂ V_0 and, after the homogenization of problem (2.4), the Galerkin approximation is the solution of the following problem: find u_h ∈ V_h such that

    a(u_h, v_h) = ⟨F, v_h⟩   for all v_h ∈ V_h.   (2.6)

If we now choose a basis for V_h, the discrete problem (2.6) leads to a system of linear equations. Let {ϕ_i}_{i=1}^N be a basis of V_h. Then assume

    u_h = Σ_{i=1}^N u_i ϕ_i   (2.7)

and choose v_h = ϕ_j for j = 1, 2, ..., N. Since a(·, ·) is bilinear, substituting the expression (2.7) for u_h into (2.6) and considering this special choice of v_h, we get

    Σ_{i=1}^N a(ϕ_i, ϕ_j) u_i = ⟨F, ϕ_j⟩,   j = 1, 2, ..., N.   (2.8)

Hence, we need to solve the linear system

    Ku = f.   (2.9)

The matrix K = [a(ϕ_i, ϕ_j)] is called the stiffness matrix, and the right-hand side vector f = [⟨F, ϕ_j⟩] is called the load vector. What we need to compute is the vector of unknowns, also called the vector of degrees of freedom, u = [u_i].
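As a one-dimensional illustration of (2.8) and (2.9) (an example we add here for concreteness; the thesis itself works in 2D), consider P1 hat functions on a uniform grid of (0, 1). Their derivatives are piecewise constant, ±1/h on the two supporting elements, so the entries a(ϕ_i, ϕ_j) = ∫ ϕ_i' ϕ_j' dx can be evaluated exactly, giving a tridiagonal stiffness matrix:

```python
import numpy as np

N = 5                    # interior nodes of a uniform grid on (0, 1)
h = 1.0 / (N + 1)

# a(phi_i, phi_i): phi_i' is +1/h and -1/h on its two supporting elements,
# so the integral is (1/h^2)*h + (1/h^2)*h = 2/h.
# a(phi_i, phi_{i+1}): the supports overlap on one element, giving -1/h.
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 / h
    if i + 1 < N:
        K[i, i + 1] = K[i + 1, i] = -1.0 / h
```

The resulting K is tridiagonal, symmetric and positive definite, in agreement with the properties of the bilinear form a(·, ·).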

Next, the subspace V_h has to be chosen in such a way that the functions in V can be accurately approximated by functions in V_h. Solving the system (2.9) is the next important step, so a further advantage is gained if V_h, or more precisely the basis {ϕ_i}_{i=1}^N, is chosen in such a way that the stiffness matrix K is as sparse as possible. A good choice is the method of Finite Elements. In this method the construction of suitable spaces V_h relies on a triangulation, which splits the domain Ω into small disjoint regions of simple geometric shape, such as triangles or quadrilaterals in R^2. In that sense the subscript h refers to the characteristic size of these regions. Under certain assumptions which prevent the triangulation from degenerating (for instance, the angles of the triangles are bounded from below so that they do not become too sharp), the finer the triangulation, the closer a finite element Galerkin solution is to the exact solution.

Usually the functions in a finite element space V_h are polynomial on the elements of the triangulation. Each polynomial defined on a given region is uniquely determined by its values (and possibly also the values of its derivatives) at some nodal points, usually the vertices of the region. So every function in V_h is determined by a set of values at nodal points. A simple example of a finite element space is the space formed by continuous functions which are linear on the triangles in R^2. This space is referred to as P1, and its elements are uniquely determined by their function values at the vertices of the triangulation.

The standard basis of a finite element space is the one in which each basis function has exactly one degree of freedom equal to 1 and the rest equal to 0. In that case the unknowns in the linear system arising from the discretization are directly the degrees of freedom of the Galerkin approximation. The overlaps of the supports of the basis functions are small, which makes the stiffness matrix sparse. Moreover, in our case the stiffness matrix is also symmetric and positive definite, due to the properties of the bilinear form.

So, the first step is the triangulation of our rectangular domain Ω = (0, 2) × (0, 1). We take h = 1/N and create a uniform mesh with grid points x_1, x_2, ..., x_N in the x-direction and y_1, y_2, ..., y_N in the y-direction. By doing this we generate N^2 nodal points x = (x_i, y_j) ∈ R^2, i, j = 1, 2, ..., N. We number these nodal points x_k, k = 1, ..., N^2, in a convenient way, so that the stiffness matrix turns out to be as sparse as possible (this is linked to the fact that the support of each basis function overlaps only a finite number of triangles).

For the simplicity of the analysis of our model problem we take linear basis functions, which are defined by the following property:

    ϕ_i(x_j) = 1 if i = j,   ϕ_i(x_j) = 0 if i ≠ j.

Given the basis functions, we can compute the stiffness matrix and the corresponding load vector. A more detailed description of the finite element method and some implementation issues can be found in [27]. The final step is to solve the resulting system with some solver.
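To make the whole pipeline concrete, here is a minimal sketch of assembling and solving the discrete system for the model problem on a uniform mesh. The data f ≡ 1 and g ≡ 0 are illustrative choices of our own, not fixed by the text. On a uniform mesh of right triangles, P1 elements are known to reproduce the classical five-point difference stencil, which the code builds with Kronecker products:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

h = 1.0 / 16
nx = int(round(2.0 / h)) - 1   # interior grid points in x, Omega = (0,2) x (0,1)
ny = int(round(1.0 / h)) - 1   # interior grid points in y

def lap1d(m):
    """1D second-difference matrix tridiag(-1, 2, -1)."""
    return diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(m, m))

# Five-point Laplacian via Kronecker products; the h^2 scaling is moved to
# the right-hand side so K holds the stencil itself.
K = kron(identity(ny), lap1d(nx)) + kron(lap1d(ny), identity(nx))
f = h**2 * np.ones(nx * ny)    # load vector for f(x, y) = 1, with g = 0

u = spsolve(K.tocsc(), f)      # a direct sparse solve of K u = f
```

The direct solve in the last line stands in for "some solver"; the rest of the thesis is precisely about replacing this step with efficient iterative methods.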

This is, so to say, the "direct" approach to solving the discrete problem (2.6). Another approach is to use a domain decomposition method [12] for the same problem. A natural question is: why do we need a domain decomposition method? The point is that in real applications the domain Ω is not as regular as in our model problem, or the resulting linear system is quite large. The approach of domain decomposition methods is to divide the original problem into a number of smaller subproblems. Sometimes such a division arises from breaking up a complicated geometry, and sometimes it is done in an artificial manner. The subproblems are much smaller and easier to solve, and very often parallel computation can be exploited.

The choice of the discretization method and of the method for solving the resulting linear system has to be made taking the nature of the problem into account. Issues like efficiency and accuracy strongly depend on this choice, and most of the time there is a trade-off involved. What we are interested in in this thesis is a particular discretization method in combination with a particular iterative method. In the coming chapters we first introduce the HSS iterative method for solving linear systems and then present the domain decomposition method, called FETI, which is later applied to our model problem. Our aim is to analyze how well the HSS iterative method performs when applied to the system coming from the FETI method. Our model problem is quite simple, but hopefully it will allow us to draw some conclusions about the HSS iterative method as a potential method for solving systems arising from FETI.


Chapter 3

Hermitian and skew-Hermitian splitting methods

In this chapter we study an iterative method, introduced in [2], for solving large non-Hermitian positive definite systems of equations, based on the Hermitian and skew-Hermitian splitting (HSS) of the coefficient matrix. Both the exact and the inexact versions of the method will be discussed. We state the main results for the general method and then consider the method in the special case when the coefficient matrix is a saddle-point matrix. A preconditioning strategy for Krylov subspace methods [6], [28] based on the Hermitian and skew-Hermitian splitting of the coefficient matrix will be presented. We conclude the chapter by considering the recently introduced Accelerated Hermitian and skew-Hermitian splitting method [14], together with the corresponding convergence analysis.

3.1 HSS iteration method

Linear systems of equations of the form

Ax = b (3.1)

where A ∈ R^{n×n} is nonsingular and x, b ∈ R^n, appear in many areas of scientific computing, for instance after the discretization of partial differential equations. Many iterative methods for solving linear systems are based on an efficient splitting of the coefficient matrix A [8]. We will study an iterative method based on a particular splitting of A into its Hermitian and skew-Hermitian parts. As we consider the


case of a real coefficient matrix, let us split the matrix A into its symmetric and skew-symmetric parts:

A = H + S,    (3.2)

where H = (1/2)(A + A^T) is the symmetric part of A and S = (1/2)(A − A^T) is the skew-symmetric part. Using this splitting, we consider the following two-step iteration, which is called the HSS iteration:

Algorithm 1 (The HSS iteration method). Given an initial guess x^(0), for k = 0, 1, 2, ..., until x^(k) converges, compute

(αI + H)x^(k+1/2) = (αI − S)x^(k) + b,
(αI + S)x^(k+1) = (αI − H)x^(k+1/2) + b,    (3.3)

where α is a given positive constant.
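For illustration, Algorithm 1 can be sketched in a few lines of NumPy. The sketch below solves both half-step systems by dense direct solves; the function name, test matrix, right-hand side and stopping rule are made up for the example and are not part of the thesis.

```python
import numpy as np

def hss_iteration(A, b, alpha, x0=None, tol=1e-10, max_iter=500):
    """Exact HSS iteration (Algorithm 1); the two half-step systems
    are solved here by dense direct solves."""
    n = A.shape[0]
    I = np.eye(n)
    H = 0.5 * (A + A.T)   # symmetric part of A
    S = 0.5 * (A - A.T)   # skew-symmetric part of A
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# small positive definite (nonsymmetric) test problem, made up for the example
A = np.array([[4.0, 1.0],
              [-1.0, 3.0]])      # symmetric part is diag(4, 3), SPD
b = np.array([1.0, 2.0])
x, its = hss_iteration(A, b, alpha=np.sqrt(12.0))  # alpha = sqrt(3*4)
```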

As can be seen, each HSS iteration alternates between the symmetric part H and the skew-symmetric part S. It should be mentioned that one can also reverse the roles of the matrices H and S in the HSS iteration, in the sense of first solving the system of linear equations with coefficient matrix αI + S and afterwards the system with coefficient matrix αI + H.

Before we study the convergence properties of the HSS iteration, let us note that at each half-step of the iteration we need to solve an n × n system with matrix αI + H or αI + S exactly. Solving the two subsystems exactly is not practical. For this reason, in actual applications one can use the CG method to solve the first system, with coefficient matrix αI + H, and some Krylov subspace method with a prescribed accuracy for the second system, with matrix αI + S. Of course, there are other possible choices for the inner iteration, for instance multigrid or multilevel methods. The idea of solving the two subsystems iteratively leads to the inexact version of the HSS algorithm, called the Inexact Hermitian and skew-Hermitian splitting (IHSS) iteration method. Now we will state the main theorem of the current section:

Theorem 3.1. Let A ∈ R^{n×n} be a positive definite matrix, let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be its symmetric and skew-symmetric parts, and let α > 0. Then the iteration matrix M(α) of the HSS iteration is given by

M(α) = (αI + S)^{-1}(αI − H)(αI + H)^{-1}(αI − S)


and its spectral radius ρ(M(α)) is bounded by

σ(α) ≡ max_{λi ∈ λ(H)} |α − λi| / |α + λi|,

where λ(H) is the spectrum of the matrix H. Therefore, it holds that

ρ(M(α)) ≤ σ(α) < 1    for all α > 0,

i.e., the HSS iteration converges to the unique solution x* ∈ R^n of the system of linear equations Ax = b.

Proof [2]. Let us first rewrite the two-step HSS iteration method in fixed-point form. For this we eliminate x^(k+1/2) from the second equation of (3.3) by using the first one. We obtain

x^(k+1) = Tα x^(k) + c,    (3.4)

where

Tα := (S + αI)^{-1}(αI − H)(H + αI)^{-1}(αI − S),    (3.5)

and

c := (S + αI)^{-1}[I + (αI − H)(H + αI)^{-1}]b.    (3.6)

Note that Tα is exactly the iteration matrix M(α) from the statement of the theorem.

Now, from the general theory of iterative methods we know that, in order to show that the method converges, it is enough to show that ρ(M(α)) < 1. Let us first note that, since A is nonsingular and α > 0, the matrices αI + H and αI + S are nonsingular, so their inverses exist and the matrix M(α) given by (3.5) is well defined. Observe that M(α) is similar to B(α), given by

B(α) = (αI − H)(αI + H)^{-1}(αI − S)(αI + S)^{-1}.

Indeed, it is easy to see that M(α) = (αI + S)^{-1}B(α)(αI + S). Therefore, from the similarity invariance of the matrix spectrum, we have ρ(M(α)) = ρ(B(α)). Hence

ρ(M(α)) = ρ(B(α))
        = ρ((αI − H)(αI + H)^{-1}(αI − S)(αI + S)^{-1})
        ≤ ||(αI − H)(αI + H)^{-1}(αI − S)(αI + S)^{-1}||_2
        ≤ ||(αI − H)(αI + H)^{-1}||_2 ||(αI − S)(αI + S)^{-1}||_2.


Now, consider Q(α) = (αI − S)(αI + S)^{-1}. Since S^T = −S and the matrices (αI + S) and (αI − S) commute, we can write

Q(α)^T Q(α) = ((αI − S)(αI + S)^{-1})^T (αI − S)(αI + S)^{-1}
            = ((αI + S)^{-1})^T (αI − S)^T (αI − S)(αI + S)^{-1}
            = ((αI + S)^T)^{-1} (αI − S)^T (αI − S)(αI + S)^{-1}
            = (αI − S)^{-1}(αI + S)(αI − S)(αI + S)^{-1}
            = (αI − S)^{-1}(αI − S)(αI + S)(αI + S)^{-1}
            = I.

Therefore, the matrix Q(α) is orthogonal (Q(α) is also called the Cayley transform of S). This means that ||Q(α)||_2 = 1. So

ρ(M(α)) ≤ ||(αI − H)(αI + H)^{-1}||_2 ||(αI − S)(αI + S)^{-1}||_2
        ≤ ||(αI − H)(αI + H)^{-1}||_2
        = max_{λi ∈ λ(H)} |α − λi| / |α + λi|.

Since λi > 0 (i = 1, 2, ..., n) and α is a positive constant, we can conclude that

ρ(M(α)) ≤ σ(α) < 1,

which proves the theorem. □

We remark that, by Theorem 3.1, the convergence speed of the HSS iteration is bounded by σ(α), which depends only on the spectrum of the symmetric part H; it depends neither on the spectrum of the skew-symmetric part S, nor on the spectrum of A, nor on the eigenvectors of H, S and A. Another remark is that if the maximum and minimum eigenvalues of H are known, then the parameter α minimizing σ(α), the upper bound of ρ(M(α)), can be computed explicitly. This is given in the following corollary.
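The statement of Theorem 3.1 is easy to verify numerically. The sketch below builds a random positive definite matrix (sizes, seed and the chosen values of α are arbitrary, not taken from the thesis) and checks that ρ(M(α)) ≤ σ(α) < 1.

```python
import numpy as np

# Numerical check of Theorem 3.1: rho(M(alpha)) <= sigma(alpha) < 1.
rng = np.random.default_rng(0)
n = 6
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)        # SPD symmetric part
S = rng.standard_normal((n, n))
S = 0.5 * (S - S.T)                # skew-symmetric part
A = H + S                          # positive definite, nonsymmetric
I = np.eye(n)
lam = np.linalg.eigvalsh(H)        # eigenvalues of H, ascending

for alpha in (0.5, 2.0, float(np.sqrt(lam[0] * lam[-1])), 10.0):
    # M(alpha) = (aI+S)^{-1} (aI-H) (aI+H)^{-1} (aI-S)
    M = np.linalg.solve(alpha * I + S,
                        (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                          alpha * I - S))
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    sigma = np.max(np.abs(alpha - lam) / np.abs(alpha + lam))
    assert rho <= sigma + 1e-12 and sigma < 1
```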

Corollary 3.2. Let A ∈ R^{n×n} be a positive definite matrix, let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be its symmetric and skew-symmetric parts, let γmin and γmax be the minimum and maximum eigenvalues of the matrix H, respectively, and let α be a positive constant. Then

α* ≡ arg min_α max_{γmin ≤ λ ≤ γmax} |α − λ| / |α + λ| = √(γmin γmax)


and

σ(α*) = (√γmax − √γmin) / (√γmax + √γmin) = (√κ(H) − 1) / (√κ(H) + 1),

where κ(H) is the spectral condition number of H.

Note that in the above corollary the optimal parameter α* minimizes only the upper bound σ(α), not the spectral radius itself. Since the asymptotic convergence rate of the alternating iteration depends heavily on the spectral radius of the iteration matrix Tα, it makes sense to try to find α such that ρ(Tα) is as small as possible. In general, finding such an α is a difficult problem.
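The formulas of Corollary 3.2 can be checked directly; the eigenvalue interval [γmin, γmax] below is made up for the example.

```python
import numpy as np

# Corollary 3.2 check: alpha* = sqrt(gamma_min * gamma_max) minimizes the
# upper bound sigma(alpha) over the eigenvalue interval of H.
gamma_min, gamma_max = 0.5, 8.0                  # illustrative interval
alpha_star = np.sqrt(gamma_min * gamma_max)      # = 2.0

def sigma(alpha, lam=np.linspace(gamma_min, gamma_max, 2001)):
    # max over the interval of |alpha - lambda| / |alpha + lambda|
    return np.max(np.abs(alpha - lam) / np.abs(alpha + lam))

kappa = gamma_max / gamma_min                    # spectral condition number
predicted = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
assert abs(sigma(alpha_star) - predicted) < 1e-9
# sigma grows when alpha moves away from alpha*
assert sigma(0.5 * alpha_star) > sigma(alpha_star)
assert sigma(2.0 * alpha_star) > sigma(alpha_star)
```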

3.2 HSS as a preconditioner for Krylov subspace methods

Even with the optimal choice of α, the convergence of the stationary iteration (3.3) is typically too slow for the method to be competitive. In this short section we will show that the iteration can be used to accelerate the convergence of efficient iterative methods such as Krylov subspace methods [6], [28]. For this, let us consider the following corollary.

Corollary 3.3. There is a unique splitting A = M − N with M nonsingular such that the iteration matrix Tα is induced by this splitting, namely Tα = M^{-1}N = I − M^{-1}A. An easy calculation shows that

M ≡ Mα = (1/(2α))(αI + H)(αI + S).    (3.7)

It is therefore possible to rewrite the iteration (3.3) in the correction form

x^(k+1) = x^(k) + Mα^{-1} r^(k),    r^(k) = b − Ax^(k).

From the corollary above it follows that the linear system (3.1) is equivalent to (has the same solution as) the linear system

(I − Tα)x = Mα^{-1}Ax = c,

where c = Mα^{-1}b. In other words, this equivalent system is a preconditioned system, which can be solved with a Krylov subspace method like GMRES to accelerate the convergence of the iteration. Hence, the matrix Mα can be seen as a preconditioner


for GMRES. Equivalently, we can say that GMRES is used to accelerate the convergence of the alternating iteration applied to Ax = b.

The factor 1/(2α) in (3.7) has no effect on the preconditioned system, so as a preconditioner we can simply use Mα = (αI + H)(αI + S). Now, under the assumptions of Theorem 3.1, since Mα^{-1}A = I − Tα, we see that for all α > 0 the eigenvalues of the left preconditioned matrix Mα^{-1}A (or of the right preconditioned matrix AMα^{-1}) are entirely contained in the open disc of radius 1 centered at 1. In particular, the preconditioned system is positive stable. Note that the smaller the spectral radius of Tα, the more clustered the eigenvalues of the preconditioned matrix around 1. Very often a clustered spectrum leads to rapid convergence of GMRES.

Choosing α so that the spectral radius of the iteration matrix is minimized does not necessarily mean that the same α is the best choice when the algorithm is used as a preconditioner for a Krylov subspace method. For certain problems it can be shown that, when α is chosen sufficiently small, the alternating iteration yields an h-independent preconditioner for GMRES, even though the spectral radius is very close to 1 [29].

Also, minimizing the spectral radius, or even the number of GMRES iterations, does not imply optimal performance in CPU time. Clearly, an efficient implementation of the method requires that the two subsystems in (3.3) be solved inexactly, and the choice of α influences the cost of the corresponding subsystem solves. Large values of α make the iterative solution of the subsystems easy. On the other hand, as α → ∞ as well as α → 0, the nonzero eigenvalues of the iteration matrix Tα approach 1, and the convergence of the outer iteration slows down. So, as in many cases, one faces a trade-off: if we define the "optimal" value of α as the one that minimizes the total amount of work required to compute an approximate solution, this will not necessarily be the value of α that minimizes the number of outer iterations.
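As an illustration, the sketch below uses Mα as a preconditioner for SciPy's GMRES. The matrix, sizes and seed are made up; here the applications of Mα^{-1} are done by dense direct solves, whereas in practice they would be inexact.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Sketch: M_alpha = (alpha*I + H)(alpha*I + S) as a preconditioner for GMRES.
rng = np.random.default_rng(1)
n = 30
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)            # SPD symmetric part
S = rng.standard_normal((n, n))
S = 0.5 * (S - S.T)                    # skew-symmetric part
A = H + S
b = rng.standard_normal(n)
I = np.eye(n)
lam = np.linalg.eigvalsh(H)
alpha = float(np.sqrt(lam[0] * lam[-1]))   # alpha* from Corollary 3.2

def apply_M_inv(r):
    # M_alpha^{-1} r = (alpha*I + S)^{-1} (alpha*I + H)^{-1} r
    return np.linalg.solve(alpha * I + S, np.linalg.solve(alpha * I + H, r))

M = LinearOperator((n, n), matvec=apply_M_inv)
x, info = gmres(A, b, M=M)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```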

3.3 IHSS iteration method

As mentioned before, solving the two subsystems of (3.3) exactly is quite impractical, and it is reasonable to solve them by some iterative method. This results in the following IHSS iteration for solving the linear system (3.1).


Algorithm 2 (The IHSS iteration method). Given an initial guess x^(0) and tolerances εk, ηk, for k = 0, 1, 2, ..., until x^(k) converges:

1. approximate the solution of (αI + H)z^(k) = r^(k), where r^(k) = b − Ax^(k), by iterating until z^(k) is such that the residual p^(k) = r^(k) − (αI + H)z^(k) satisfies

||p^(k)|| ≤ εk ||r^(k)||,

and then compute x^(k+1/2) = x^(k) + z^(k);

2. approximate the solution of (αI + S)z^(k+1/2) = r^(k+1/2), where r^(k+1/2) = b − Ax^(k+1/2), by iterating until z^(k+1/2) is such that the residual q^(k+1/2) = r^(k+1/2) − (αI + S)z^(k+1/2) satisfies

||q^(k+1/2)|| ≤ ηk ||r^(k+1/2)||,

and then compute x^(k+1) = x^(k+1/2) + z^(k+1/2).

In [2] the analysis of IHSS is given in slightly more general terms, and, as the main result, the following theorem is stated.

Theorem 3.4. Let A ∈ R^{n×n} be a positive definite matrix, let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be its symmetric and skew-symmetric parts, and let α > 0. If {x^(k)} is the iterative sequence generated by the IHSS iteration method and x* is the exact solution of the system of linear equations (3.1), then it holds that

|||x^(k+1) − x*||| ≤ (σ(α) + θρηk)(1 + θεk) |||x^(k) − x*|||,    k = 0, 1, 2, ...,

where

ρ = ||(αI + S)(αI + H)^{-1}||_2,    θ = ||A(αI + S)^{-1}||_2,

and the norm |||·||| is defined by |||x||| := ||(αI + S)x||_2 for all x ∈ C^n.

In particular, if (σ(α) + θρηmax)(1 + θεmax) < 1, then the iterative sequence {x^(k)} converges to x*, where εmax = max_k εk and ηmax = max_k ηk.

It should be mentioned that the tolerances εk and ηk are not required to approach zero as k increases in order to obtain convergence of the IHSS iteration; they are required to approach zero only in order to asymptotically recover the original convergence rate of the HSS iteration. The following theorem presents one possible way of choosing the tolerances εk and ηk such that the original convergence rate of the two-step splitting iterative scheme is asymptotically recovered [2].


Theorem 3.5. Let the assumptions of Theorem 3.4 be satisfied. Suppose that τ1(k) and τ2(k) are nondecreasing positive sequences with τ1(k) ≥ 1, τ2(k) ≥ 1 and lim_{k→∞} τ1(k) = lim_{k→∞} τ2(k) = +∞, and that δ1 and δ2 are real constants in the interval (0, 1) satisfying

εk ≤ c1 δ1^{τ1(k)} and ηk ≤ c2 δ2^{τ2(k)},    k = 0, 1, 2, ...,

with c1 and c2 nonnegative constants. Then it holds that

|||x^(k+1) − x*||| ≤ (√σ(α) + ωθ δ^{τ(k)})^2 |||x^(k) − x*|||,    k = 0, 1, 2, ...,

where ρ and θ are defined as in Theorem 3.4 and τ(k), δ, ω are given by

τ(k) = min{τ1(k), τ2(k)},    δ = max{δ1, δ2},    ω = max{ √(c1 c2 ρ), (c1 σ(α) + c2 ρ) / (2√σ(α)) }.

In particular, we have

lim sup_{k→∞} |||x^(k+1) − x*||| / |||x^(k) − x*||| = σ(α),

which means that the convergence rate of the IHSS iteration method is asymptotically the same as that of the HSS iteration method.

3.4 HSS for generalized saddle-point problems

In the previous sections we considered the HSS/IHSS method for solving a general linear system of equations. In this section we specialize the linear system, namely we consider the case when it is given as a saddle-point system. Such systems appear quite often in applications, including computational fluid dynamics [15], [16], [24], [17], mixed finite element approximations of elliptic PDEs [18], [19], [25], electrical networks and optimization [20], [23], [22], [21], and solving them efficiently is an important issue. Nowadays many methods are available for such systems; a very detailed and interesting overview of saddle-point problems and their solvers is given in [11]. We consider the solution of a system of linear equations with the following 2 × 2 block structure:

[ A   B^T ] [ u ]   [ f ]
[ B   −C  ] [ p ] = [ g ]    (3.8)

with A ∈ R^{n×n}, B ∈ R^{m×n}, C ∈ R^{m×m}, f ∈ R^n, g ∈ R^m and m ≤ n. We will assume that:


• A has positive semidefinite symmetric part H = (1/2)(A + A^T),
• rank(B) = m,
• ker(H) ∩ ker(B) = {0},
• C is symmetric positive semidefinite.

These assumptions guarantee existence and uniqueness of the solution. Very often A is symmetric positive definite, but in some cases A is either symmetric and singular (i.e., only positive semidefinite) or nonsymmetric with positive definite symmetric part H. When A is symmetric positive (semi-)definite, the coefficient matrix in (3.8) is symmetric indefinite, and indefinite solvers can be used. Alternatively, instead of (3.8) one can solve the equivalent nonsymmetric system

[ A    B^T ] [ u ]   [  f ]
[ −B   C   ] [ p ] = [ −g ]    (3.9)

or

𝒜x = b,

where

𝒜 = [ A    B^T ]
    [ −B   C   ],    x = [u^T, p^T]^T and b = [f^T, −g^T]^T.

In the following theorem we summarize some quite useful properties of 𝒜, which hold independently of whether A is symmetric or not.

Theorem 3.6. Let 𝒜 ∈ R^{(n+m)×(n+m)} be the coefficient matrix in (3.9). Assume that H = (1/2)(A + A^T) is positive semidefinite, B has full rank, C = C^T is positive semidefinite, and ker(H) ∩ ker(B) = {0}. Let σ(𝒜) denote the spectrum of 𝒜. Then

• (a) 𝒜 is nonsingular;
• (b) 𝒜 is semipositive real: ⟨𝒜v, v⟩ = v^T 𝒜v ≥ 0 for all v ∈ R^{n+m};
• (c) 𝒜 is positive semistable, that is, the eigenvalues of 𝒜 have nonnegative real part: Re(λ) ≥ 0 for all λ ∈ σ(𝒜);
• (d) if, in addition, H = (1/2)(A + A^T) is positive definite, then 𝒜 is positive stable: Re(λ) > 0 for all λ ∈ σ(𝒜).


Proof [29].

• (a) Let x = [u^T, p^T]^T be such that 𝒜x = 0. Then

Au + B^T p = 0 and −Bu + Cp = 0.    (3.10)

Now, 𝒜x = 0 implies x^T 𝒜x = u^T Au + p^T Cp = 0. This and the fact that both u^T Au and p^T Cp are nonnegative imply u^T Au = p^T Cp = 0. On the other hand, u^T Au = u^T Hu = 0, which means that u ∈ ker(H), since H is symmetric positive semidefinite. In the same way, p^T Cp = 0 and C positive semidefinite imply Cp = 0. The second equation in (3.10) together with Cp = 0 gives Bu = 0, so u ∈ ker(B). Since ker(H) ∩ ker(B) = {0}, we conclude that u = 0. Then from the first equation in (3.10) we see that B^T p = 0. As B^T has full column rank, B^T p = 0 implies p = 0. Therefore 𝒜x = 0 has only the solution x = 0, which means that 𝒜 is nonsingular.

• (b) For any v ∈ R^{n+m} we have v^T 𝒜v = v^T 𝓗v, where 𝓗 is the symmetric part of 𝒜, namely

𝓗 = (1/2)(𝒜 + 𝒜^T) = [ H  O ]
                       [ O  C ].

As H and C are positive semidefinite, clearly 𝓗 is positive semidefinite, hence v^T 𝒜v ≥ 0.

• (c) Let (λ, v) be an eigenpair of 𝒜 with ||v||_2 = 1. Then v*𝒜v = λ and (v*𝒜v)* = v*𝒜^T v = λ̄. On the other hand,

v*(𝒜 + 𝒜^T)v = Re(v)^T (𝒜 + 𝒜^T) Re(v) + Im(v)^T (𝒜 + 𝒜^T) Im(v).

We have already shown that 𝒜 is semipositive real, therefore the right-hand side is nonnegative. Hence,

Re(λ) = (1/2) v*(𝒜 + 𝒜^T)v ≥ 0.

• (d) Let (λ, v) be an eigenpair of 𝒜 with v = [u^T, p^T]^T. Then

Re(λ) = u*Hu + p*Cp = Re(u)^T H Re(u) + Im(u)^T H Im(u) + Re(p)^T C Re(p) + Im(p)^T C Im(p).


Since H is assumed to be positive definite and the quantity above is nonnegative, Re(λ) can be zero only if u = 0. But if u = 0, then the first block equation of 𝒜v = λv gives B^T p = 0. Hence p = 0, as B^T has full column rank. So u = p = 0, which means that v = [u^T, p^T]^T = 0. But v is assumed to be an eigenvector, so we get a contradiction. This implies that Re(λ) > 0. □
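The properties (a)-(d) are easy to observe on a small example. The matrices below are made up so that the symmetric part H is SPD, B has full rank and C = O.

```python
import numpy as np

# Illustrative check of Theorem 3.6 on a small generalized saddle-point matrix.
A = np.array([[2.0, 1.0],
              [-1.0, 2.0]])          # symmetric part H = 2*I is SPD
B = np.array([[1.0, 1.0]])           # rank(B) = m = 1
C = np.zeros((1, 1))                 # symmetric positive semidefinite
cal_A = np.block([[A, B.T],
                  [-B, C]])

# (a) nonsingularity
assert abs(np.linalg.det(cal_A)) > 1e-12
# (b) semipositive real: the symmetric part of cal_A is positive semidefinite
sym_part = 0.5 * (cal_A + cal_A.T)
assert np.all(np.linalg.eigvalsh(sym_part) >= -1e-12)
# (c)/(d) here H is positive definite, so cal_A is positive stable
eigs = np.linalg.eigvals(cal_A)
assert np.all(eigs.real > 0)
```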

The theorem above shows that changing the sign of the last m equations in (3.8) may destroy the symmetry of the coefficient matrix (in case A is symmetric), but we gain positive (semi-)definiteness. Having a positive (semi-)definite coefficient matrix can be an advantage when using Krylov subspace methods, for instance GMRES. So, under the conditions of the theorem, existence of the solution is guaranteed and some advantageous properties of the coefficient matrix are obtained. Now, let us solve the linear system (3.9) by the HSS iterative method introduced in the first section of the current chapter. According to the two-step iteration algorithm, at each iteration we need to solve the following systems:

(αI + 𝓗)x^(k+1/2) = (αI − 𝓢)x^(k) + b,
(αI + 𝓢)x^(k+1) = (αI − 𝓗)x^(k+1/2) + b,    (3.11)

where 𝓗 and 𝓢 are the symmetric and skew-symmetric parts of 𝒜, respectively, namely

𝒜 = [ A    B^T ] = [ H  O ] + [ S    B^T ] = 𝓗 + 𝓢,
    [ −B   C   ]   [ O  C ]   [ −B   O   ]

where:

• H = (1/2)(A + A^T) and S = (1/2)(A − A^T) are the symmetric and skew-symmetric parts of A,
• 𝓗 = (1/2)(𝒜 + 𝒜^T) and 𝓢 = (1/2)(𝒜 − 𝒜^T) are those of 𝒜.

Now, as 𝓗 = diag(H, C), we can conclude that 𝒜 is positive real (i.e., 𝓗 is symmetric positive definite) if and only if both H and C are symmetric positive definite (SPD). Unfortunately, that is not the case in most applications, so the convergence theory discussed in the first section is not applicable, since in the context of


generalized saddle-point problems the matrix 𝓗 is only positive semidefinite and, in general, singular. In this case a more detailed analysis is required, since for matrices whose symmetric part is positive semidefinite and singular the alternating iteration is in general not convergent. For the analysis we consider the fixed-point formulation of the HSS iteration:

x^(k+1) = Tα x^(k) + c,    (3.12)

where

Tα := (𝓢 + αI)^{-1}(αI − 𝓗)(𝓗 + αI)^{-1}(αI − 𝓢),    (3.13)

and

c := (𝓢 + αI)^{-1}[I + (αI − 𝓗)(𝓗 + αI)^{-1}]b.    (3.14)

The following theorem shows that for a large class of generalized saddle-pointproblems the alternating iteration converges.

Theorem 3.7. Consider problem (3.9) and assume that A is positive real, C is symmetric positive semidefinite, and B has full rank. Then the iteration (3.11) is unconditionally convergent; that is, ρ(Tα) < 1 for all α > 0.

Proof [29]. Consider the matrix

Kα := (αI − 𝓗)(𝓗 + αI)^{-1}(αI − 𝓢)(𝓢 + αI)^{-1} = RU,

with R := (αI − 𝓗)(𝓗 + αI)^{-1} and U := (αI − 𝓢)(𝓢 + αI)^{-1}. Let us first notice that Tα is similar to Kα. Indeed, it is easy to see that

Tα = (𝓢 + αI)^{-1} Kα (𝓢 + αI).

Next, since (αI − 𝓗) and (𝓗 + αI)^{-1} commute and 𝓗 is symmetric, it is easy to show that R is symmetric. Similarly, since (αI − 𝓢) and (𝓢 + αI)^{-1} commute and 𝓢^T = −𝓢, one can show that U is an orthogonal matrix. Indeed,

R^T = ((αI − 𝓗)(𝓗 + αI)^{-1})^T
    = (𝓗 + αI)^{-T}(αI − 𝓗)^T
    = (𝓗 + αI)^{-1}(αI − 𝓗)
    = (αI − 𝓗)(𝓗 + αI)^{-1}
    = R,


therefore R^T = R. Now, to show that U is orthogonal we need to show that UU^T = I:

UU^T = (αI − 𝓢)(𝓢 + αI)^{-1}((αI − 𝓢)(𝓢 + αI)^{-1})^T
     = (αI − 𝓢)(𝓢 + αI)^{-1}(𝓢 + αI)^{-T}(αI − 𝓢)^T
     = (αI − 𝓢)(𝓢 + αI)^{-1}(−𝓢 + αI)^{-1}(αI + 𝓢)
     = (𝓢 + αI)^{-1}(αI − 𝓢)(−𝓢 + αI)^{-1}(αI + 𝓢)
     = (𝓢 + αI)^{-1}(αI + 𝓢)
     = I.

Now, since R is symmetric, it is orthogonally similar to the (n + m) × (n + m) diagonal matrix D given by

D = diag( (α − µ1)/(α + µ1), ..., (α − µn)/(α + µn), (α − υ1)/(α + υ1), ..., (α − υm)/(α + υm) ),

where µ1, µ2, ..., µn are the (positive) eigenvalues of H and υ1, υ2, ..., υm are the (nonnegative) eigenvalues of C. This means that there is an orthogonal matrix V of order (n + m) such that

V^T R V = D = [ D1  O  ]
              [ O   D2 ],

where D1 and D2 are diagonal matrices of order n and m, respectively. Observe that

|α − µi| / |α + µi| < 1 for 1 ≤ i ≤ n and |α − υi| / |α + υi| ≤ 1 for 1 ≤ i ≤ m.

Now, since V is orthogonal, RU is orthogonally similar to (V^T R V)(V^T U V). Indeed,

V^T RU V = (V^T R V)(V^T U V) = DQ,

where Q := V^T U V, as a product of orthogonal matrices, is orthogonal. We had that the iteration matrix Tα is similar to RU and, since RU is similar to DQ, we conclude that Tα is similar to DQ. This means:

ρ(Tα) = ρ(DQ) = ρ(QD).


If we manage to show that ρ(QD) < 1 for all α > 0, the theorem is proved. For this, let us partition Q as

Q = [ Q11  Q12 ]
    [ Q21  Q22 ].

Then the matrix product QD can be written as

QD = [ Q11 D1  Q12 D2 ]
     [ Q21 D1  Q22 D2 ].

Now, assume that λ ∈ C is an eigenvalue of QD and that x ∈ C^{n+m} is a corresponding eigenvector with ||x||_2 = 1. We assume that λ ≠ 0, otherwise the proof is finished. We want to show that |λ| < 1. Now,

QDx = λx implies Dx = λQ^T x,

therefore, by taking norms and recalling that Q is an orthogonal matrix, we get

||Dx||_2 = |λ| ||Q^T x||_2 = |λ| ||x||_2 = |λ|.

Hence

|λ|^2 = ||Dx||_2^2 = Σ_{i=1}^{n} ((α − µi)/(α + µi))^2 |xi|^2 + Σ_{j=1}^{m} ((α − υj)/(α + υj))^2 |x_{n+j}|^2 ≤ ||x||_2^2 = 1.    (3.15)

So we have shown that |λ| ≤ 1. To prove that |λ| < 1 strictly, we will show that there exists at least one i (1 ≤ i ≤ n) such that xi ≠ 0. For this, let us assume that xi = 0 for all i (1 ≤ i ≤ n). Then the eigenvector is of the form x = [0, x̂]^T with x̂ ∈ C^m. Therefore,

QDx = [ Q11 D1  Q12 D2 ] [ 0 ]   [ Q12 D2 x̂ ]   [ 0  ]
      [ Q21 D1  Q22 D2 ] [ x̂ ] = [ Q22 D2 x̂ ] = [ λx̂ ],    (3.16)

which means that Q12 D2 x̂ = 0. Let us for a moment assume that Q12 has full column rank. Then Q12 D2 x̂ = 0 implies D2 x̂ = 0. But according to (3.16), λx̂ = Q22 D2 x̂ = 0, and since we assumed λ ≠ 0, it must be that x̂ = 0. Therefore x = [0, x̂]^T = 0, which is a contradiction, as x is an eigenvector. This means that if xi = 0 for all i (1 ≤ i ≤ n), then x = 0; therefore there exists at least one i (1 ≤ i ≤ n) such that xi ≠ 0.


To finalize the proof we only need to show that Q12 has full column rank. Recall that Q = V^T U V with

V = [ V11  O   ]
    [ O    V22 ],

where V11 ∈ R^{n×n} is the orthogonal matrix that diagonalizes (αI_n − H)(αI_n + H)^{-1} and V22 ∈ R^{m×m} is the orthogonal matrix that diagonalizes (αI_m − C)(αI_m + C)^{-1}. Also recall that

U = (αI − 𝓢)(𝓢 + αI)^{-1} = [ αI_n − S   −B^T ] [ αI_n + S   B^T  ]^{-1} = [ U11  U12 ]
                             [ B          αI_m ] [ −B         αI_m ]        [ U21  U22 ].

One can show that

U12 = −[(αI_n − S)(αI_n + S)^{-1} + I_n] B^T [αI_m + B(αI_n + S)^{-1}B^T]^{-1}.

Let us now show that −1 cannot be an eigenvalue of the orthogonal matrix (αI_n − S)(αI_n + S)^{-1}. Indeed, if (−1, x) with x ∈ C^n were an eigenpair of (αI_n − S)(αI_n + S)^{-1}, then

(αI_n − S)(αI_n + S)^{-1}x = −x ⇔ (αI_n + S)^{-1}(αI_n − S)x = −x ⇔ (αI_n − S)x = −(αI_n + S)x ⇔ αx − Sx = −αx − Sx ⇔ x = −x ⇔ x = 0.

But x = 0 contradicts the assumption that x is an eigenvector; therefore −1 cannot be an eigenvalue of (αI_n − S)(αI_n + S)^{-1}. This means that (αI_n − S)(αI_n + S)^{-1} + I_n is nonsingular. Now, (αI_n + S)^{-1} is positive real as the inverse of a positive real matrix, and B^T has full column rank, therefore B(αI_n + S)^{-1}B^T is positive real. This implies that αI_m + B(αI_n + S)^{-1}B^T is nonsingular. Now,

Q = V^T U V = [ V11^T U11 V11   V11^T U12 V22 ]
              [ V22^T U21 V11   V22^T U22 V22 ],

which means that

Q12 = V11^T U12 V22 = −V11^T [(αI_n − S)(αI_n + S)^{-1} + I_n] B^T [αI_m + B(αI_n + S)^{-1}B^T]^{-1} V22.


This shows that Q12 has full column rank, since V11^T and V22 are orthogonal and B^T has full column rank. This proves the theorem. □

So, the rather technical proof of Theorem 3.7 ensures that the HSS iteration method applied to a generalized saddle-point problem converges under certain conditions on the matrices A, B and C. Luckily, in many applications these conditions are satisfied, so the HSS iterative method can be seen as a potential solver for saddle-point problems. Let us now have a closer look at the system (3.11), taking into account that 𝒜 has a special block structure, and make some general observations. The first half-step of (3.11) requires the solution of two uncoupled linear systems:

(H + αI_n)u^(k+1/2) = αu^(k) − Su^(k) + f − B^T p^(k),
(C + αI_m)p^(k+1/2) = αp^(k) − g + Bu^(k).    (3.17)

Since both coefficient matrices in (3.17) are symmetric positive definite (SPD), any solver for SPD systems can be applied, for instance the preconditioned conjugate gradient method. It should be mentioned that the addition of a positive term to the main diagonal of H and C considerably improves the condition numbers, which in turn results in better convergence rates of iterative methods applied to (3.17). If we normalize H so that its largest eigenvalue equals 1, then for the spectral condition number of H + αI_n we have

κ(H + αI_n) = (1 + α)/(λ_min(H) + α) ≤ 1 + 1/α.

Remark that even if α is reasonably small, for instance α = 0.1, the condition number is small as well (κ(H + αI_n) ≤ 11). Unless the value of α is very small, CG applied to (3.17) will converge rapidly, independently of the number n of unknowns. Now, let us consider the second half-step of the algorithm given by (3.11). It requires the solution of two coupled linear systems of the form

(αI_n + S)u^(k+1) + B^T p^(k+1) = (αI_n − H)u^(k+1/2) + f ≡ f_k,
−Bu^(k+1) + αp^(k+1) = (αI_m − C)p^(k+1/2) − g ≡ g_k.    (3.18)

Solving this system is less trivial and can be done in several ways. For example, one can eliminate u^(k+1) from the second equation using the first one, which results in a smaller linear system (of order m) of the form:


[B(I_n + α^{-1}S)^{-1}B^T + α^2 I_m] p^(k+1) = B(I_n + α^{-1}S)^{-1} f_k + α g_k.    (3.19)

Once the solution of (3.19) is computed, the vector u^(k+1) can be obtained from

u^(k+1) = (αI_n + S)^{-1}(f_k − B^T p^(k+1)).    (3.20)

Note that if S = O, then (3.19) simplifies to

[BB^T + α^2 I_m] p^(k+1) = B f_k + α g_k    (3.21)

and

u^(k+1) = (1/α)(f_k − B^T p^(k+1)).

As we will see in the following chapter, this will be the case for our application of the FETI domain decomposition method to the model problem. Moreover, in our case BB^T is sufficiently sparse, so system (3.21) can be solved using a sparse Cholesky factorization. In general, if BB^T is not sparse enough, other methods, such as the preconditioned conjugate gradient (PCG) method, can be used.
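For illustration, the reduced solve (3.21) with S = O can be sketched as follows; the Cholesky factorization here is dense and the data are random, whereas in the FETI setting BB^T would be sparse.

```python
import numpy as np

def second_half_step(B, alpha, f_k, g_k):
    """Second half-step (3.18) in the case S = O: solve the reduced SPD
    system (3.21), (B B^T + alpha^2 I_m) p = B f_k + alpha g_k, by a
    Cholesky factorization, then recover u = (f_k - B^T p) / alpha."""
    m = B.shape[0]
    K = B @ B.T + alpha**2 * np.eye(m)   # SPD: B B^T is PSD and alpha > 0
    L = np.linalg.cholesky(K)            # K = L L^T
    rhs = B @ f_k + alpha * g_k
    p = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    u = (f_k - B.T @ p) / alpha
    return u, p

# consistency check against the coupled system (3.18) with S = O:
# alpha*u + B^T p = f_k and -B u + alpha*p = g_k (data is illustrative)
rng = np.random.default_rng(2)
n, m, alpha = 5, 2, 0.7
B = rng.standard_normal((m, n))
f_k = rng.standard_normal(n)
g_k = rng.standard_normal(m)
u, p = second_half_step(B, alpha, f_k, g_k)
```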

As already mentioned, the linear systems in (3.11) need not be solved exactly. Inexact solves can considerably reduce the cost of each iteration, though at the expense of slower convergence. It should be mentioned, however, that if the alternating scheme is used as a preconditioner for Krylov subspace methods, then inexact solves are a good choice.

3.5 AHSS splitting iteration for saddle-point problems

In the previous section we considered the application of the HSS iterative method to generalized saddle-point problems; in this section we discuss the Accelerated Hermitian and skew-Hermitian splitting (AHSS) iteration method [14] for large sparse saddle-point problems, built upon the HSS iterative method. We will briefly present the AHSS methods, including their algorithmic description and their unconditional convergence property. We will also point out the advantage of this class of methods over the HSS iterative methods.


AHSS iteration methods are two-parameter versions of the HSS iterative methods. Theoretical and practical analysis shows that the AHSS iteration methods algorithmically generalize the HSS methods without any extra computational work, and the resulting iterative scheme turns out to be rapidly convergent. Moreover, the numerical sensitivity of the iterative scheme with respect to the iteration parameters is considerably decreased. As we will see, for all positive parameters the AHSS iteration methods converge unconditionally to the unique solution of the saddle-point problem.

We will consider the saddle-point problem
\[
\mathcal{A}x \equiv
\begin{bmatrix} A & B \\ -B^T & O \end{bmatrix}
\begin{bmatrix} y \\ z \end{bmatrix}
=
\begin{bmatrix} f \\ g \end{bmatrix}
\equiv b, \qquad (3.22)
\]

with $A \in \mathbb{R}^{n\times n}$ symmetric positive definite, $B \in \mathbb{R}^{n\times m}$ of full column rank, $f \in \mathbb{R}^n$, $g \in \mathbb{R}^m$ and $m \le n$. As we have already seen, these assumptions guarantee the existence of a solution of (3.22). In order to simplify the analysis, without any loss of generality, we transform the saddle-point problem (3.22) into an equivalent form [14]. For this, let $W \in \mathbb{R}^{n\times n}$ be a non-singular matrix such that $W^T A W = I_n$.

For instance we can take $W = A^{-1/2}$. Consider also another non-singular matrix $Z \in \mathbb{R}^{m\times m}$ and construct $\hat{B}$, $C$, $T$, $\hat{\mathcal{A}}$, $\hat{x}$ and $\hat{b}$ in the following way:
\[
\hat{B} = W^T B Z, \qquad C = Z^{-T} Z^{-1}, \qquad (3.23)
\]
\[
T = \begin{bmatrix} W & O \\ O & Z \end{bmatrix}, \qquad
\hat{\mathcal{A}} \equiv T^T \mathcal{A} T =
\begin{bmatrix} I_n & \hat{B} \\ -\hat{B}^T & O \end{bmatrix}, \qquad (3.24)
\]
\[
\hat{x} \equiv \begin{bmatrix} \hat{y} \\ \hat{z} \end{bmatrix} = T^{-1} x
= \begin{bmatrix} W^{-1} y \\ Z^{-1} z \end{bmatrix}, \qquad
\hat{b} \equiv \begin{bmatrix} \hat{f} \\ \hat{g} \end{bmatrix} = T^T b
= \begin{bmatrix} W^T f \\ Z^T g \end{bmatrix}. \qquad (3.25)
\]

With this one can show that the saddle-point problem (3.22) is equivalent to
\[
\hat{\mathcal{A}} \hat{x} = \hat{b}. \qquad (3.26)
\]

Now, let us split the coefficient matrix $\hat{\mathcal{A}}$ into its symmetric and skew-symmetric parts
\[
\hat{\mathcal{A}} = H + S,
\]
with
\[
H = \frac{1}{2}(\hat{\mathcal{A}} + \hat{\mathcal{A}}^T) = \begin{bmatrix} I_n & O \\ O & O \end{bmatrix}
\qquad \text{and} \qquad
S = \frac{1}{2}(\hat{\mathcal{A}} - \hat{\mathcal{A}}^T) = \begin{bmatrix} O & \hat{B} \\ -\hat{B}^T & O \end{bmatrix}.
\]

We apply the HSS iteration technique and obtain the following iteration scheme:
\[
\begin{aligned}
(\Lambda + H)\,\hat{x}^{(k+\frac{1}{2})} &= (\Lambda - S)\,\hat{x}^{(k)} + \hat{b},\\
(\Lambda + S)\,\hat{x}^{(k+1)} &= (\Lambda - H)\,\hat{x}^{(k+\frac{1}{2})} + \hat{b},
\end{aligned} \qquad (3.27)
\]
where
\[
\Lambda = \begin{bmatrix} \alpha I_n & O \\ O & \beta I_m \end{bmatrix},
\]
with $\alpha$ and $\beta$ positive constants.

As can be seen, the iterative scheme given by (3.27) is not the same as the previously presented scheme (3.11), as it involves two arbitrary parameters $\alpha$ and $\beta$. When $\alpha \neq \beta$ the matrix $Q(\alpha, \beta) = (\Lambda + S)^{-1}(\Lambda - S)$ is not unitary. If we now write down $H$ and $S$ explicitly, then (3.27) can be rewritten as follows:

\[
\begin{bmatrix} \alpha A & B \\ -B^T & \beta C \end{bmatrix}
\begin{bmatrix} y^{k+1} \\ z^{k+1} \end{bmatrix}
=
\begin{bmatrix} \dfrac{\alpha(\alpha-1)}{\alpha+1} A & -\dfrac{\alpha-1}{\alpha+1} B \\[1ex] B^T & \beta C \end{bmatrix}
\begin{bmatrix} y^{k} \\ z^{k} \end{bmatrix}
+
\begin{bmatrix} \dfrac{2\alpha}{\alpha+1} f \\[1ex] 2g \end{bmatrix}, \qquad (3.28)
\]

or, equivalently, (3.28) can be written as
\[
\begin{bmatrix} y^{k+1} \\ z^{k+1} \end{bmatrix}
= T(\alpha, \beta) \begin{bmatrix} y^{k} \\ z^{k} \end{bmatrix}
+ K(\alpha, \beta) \begin{bmatrix} f \\ g \end{bmatrix}, \qquad (3.29)
\]

where
\[
T(\alpha, \beta) =
\begin{bmatrix} \alpha A & B \\ -B^T & \beta C \end{bmatrix}^{-1}
\begin{bmatrix} \dfrac{\alpha(\alpha-1)}{\alpha+1} A & -\dfrac{\alpha-1}{\alpha+1} B \\[1ex] B^T & \beta C \end{bmatrix} \qquad (3.30)
\]
and
\[
K(\alpha, \beta) =
\begin{bmatrix} \alpha A & B \\ -B^T & \beta C \end{bmatrix}^{-1}
\begin{bmatrix} \dfrac{2\alpha}{\alpha+1} I & O \\[1ex] O & 2I \end{bmatrix}. \qquad (3.31)
\]

One can show that there exists a splitting of the coefficient matrix
\[
\mathcal{A} = M(\alpha, \beta) - N(\alpha, \beta)
\]


such that the iteration matrix satisfies $T(\alpha, \beta) = M^{-1}(\alpha, \beta) N(\alpha, \beta)$. It turns out that the matrices $M(\alpha, \beta)$ and $N(\alpha, \beta)$ satisfying these conditions are given by
\[
M(\alpha, \beta) =
\begin{bmatrix} \dfrac{\alpha+1}{2} A & \dfrac{\alpha+1}{2\alpha} B \\[1ex] -\dfrac{1}{2} B^T & \dfrac{\beta}{2} C \end{bmatrix},
\qquad
N(\alpha, \beta) =
\begin{bmatrix} \dfrac{\alpha-1}{2} A & -\dfrac{\alpha-1}{2\alpha} B \\[1ex] \dfrac{1}{2} B^T & \dfrac{\beta}{2} C \end{bmatrix}. \qquad (3.32)
\]
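The splitting property $M(\alpha, \beta) - N(\alpha, \beta) = \mathcal{A}$ can be checked directly by cancelling terms blockwise; the following short sketch (our own illustration, with made-up small random data) does the same check numerically.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 2
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)            # symmetric positive definite
B = rng.standard_normal((n, m))        # full column rank (generically)
C = np.eye(m)                          # an admissible SPD choice for C
alpha, beta = 1.7, 0.4

M = np.block([[(alpha + 1) / 2.0 * A, (alpha + 1) / (2.0 * alpha) * B],
              [-0.5 * B.T, beta / 2.0 * C]])
N = np.block([[(alpha - 1) / 2.0 * A, -(alpha - 1) / (2.0 * alpha) * B],
              [0.5 * B.T, beta / 2.0 * C]])
Asad = np.block([[A, B], [-B.T, np.zeros((m, m))]])
assert np.allclose(M - N, Asad)   # (3.32) is indeed a splitting of the coefficient matrix
```

Note that the $C$-blocks cancel in the difference, which is why $C$ may be chosen freely among symmetric positive definite matrices.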

In fact, at each iteration we need to solve a linear system with coefficient matrix $M'(\alpha, \beta)$ or, equivalently, with $M(\alpha, \beta)$, where
\[
M'(\alpha, \beta) = \begin{bmatrix} \alpha A & B \\ -B^T & \beta C \end{bmatrix}.
\]

Using the block-triangular factorization of the matrix $M(\alpha, \beta)$, for an initial guess $x^{(0)} = [y^{(0)T}, z^{(0)T}]^T \in \mathbb{R}^{n+m}$ and two positive constants $\alpha$ and $\beta$, the AHSS algorithm reads as follows.

Algorithm 3. (The AHSS iteration method)
Given an initial guess $x^{(0)}$, for $k = 0, 1, 2, \ldots$, until $x^{(k)}$ converges:

1. Compute the current residual vectors
\[
r^{(k)} = f - (A y^{(k)} + B z^{(k)}), \qquad s^{(k)} = g + B^T y^{(k)};
\]
2. Compute the auxiliary vectors
\[
u^{(k)} = \frac{2}{\alpha + 1} r^{(k)}, \qquad v^{(k)} = B^T A^{-1} u^{(k)} + 2 s^{(k)};
\]
3. Compute the update vectors from
\[
\Bigl(\beta C + \frac{1}{\alpha} B^T A^{-1} B\Bigr) w^{(k)} = v^{(k)}, \qquad
A t^{(k)} = u^{(k)} - \frac{1}{\alpha} B w^{(k)};
\]
4. Form the next iterate
\[
y^{(k+1)} = y^{(k)} + t^{(k)}, \qquad z^{(k+1)} = z^{(k)} + w^{(k)}.
\]
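The four steps above can be sketched in a few lines of NumPy. This is a minimal dense illustration, not part of the thesis; the function name `ahss` and the use of direct solves for the two inner subsystems are our own choices.

```python
import numpy as np

def ahss(A, B, C, f, g, alpha, beta, tol=1e-8, max_it=2000):
    """Dense sketch of the AHSS iteration (Algorithm 3) for the system
    [[A, B], [-B^T, O]] [y; z] = [f; g], with A, C SPD and alpha, beta > 0."""
    n, m = B.shape
    y, z = np.zeros(n), np.zeros(m)
    # Schur complement of M'(alpha, beta): beta*C + (1/alpha) * B^T A^{-1} B
    S = beta * C + (1.0 / alpha) * B.T @ np.linalg.solve(A, B)
    for _ in range(max_it):
        # Step 1: current residual vectors
        r = f - (A @ y + B @ z)
        s = g + B.T @ y
        if max(np.linalg.norm(r), np.linalg.norm(s)) < tol:
            break
        # Step 2: auxiliary vectors
        u = 2.0 / (alpha + 1.0) * r
        v = B.T @ np.linalg.solve(A, u) + 2.0 * s
        # Step 3: update vectors (the two inner solves)
        w = np.linalg.solve(S, v)
        t = np.linalg.solve(A, u - (1.0 / alpha) * B @ w)
        # Step 4: next iterate
        y, z = y + t, z + w
    return y, z
```

For a quick check one can manufacture a solution, build $f = A y^* + B z^*$ and $g = -B^T y^*$, and compare the returned iterate with $(y^*, z^*)$.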

According to the algorithm, at each step we have to solve two subsystems of linear equations, with the coefficient matrices $A$ and $(\beta C + \frac{1}{\alpha} B^T A^{-1} B)$, the latter being the Schur complement of the matrix $M'(\alpha, \beta)$. If we recall that $C$ is an arbitrary symmetric positive definite matrix, we can choose it in such a way that the matrix $(\beta C + \frac{1}{\alpha} B^T A^{-1} B)$ is easily invertible, in order to solve the subsystems as efficiently as possible.
Now we will briefly present the main results of the convergence analysis. We start with the following lemma, which provides explicit expressions for the eigenvalues of the iteration matrix $T(\alpha, \beta)$ [14].

Lemma 3.8. Consider the saddle-point problem (3.22) and assume that $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite, $B \in \mathbb{R}^{n\times m}$ has full column rank and $\alpha, \beta > 0$ are given. Let $C \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix. If $\sigma_k$ ($k = 1, 2, \ldots, m$) are the positive singular values of the matrix $\hat{B} \in \mathbb{R}^{n\times m}$ given by (3.23), then the eigenvalues of the iteration matrix $T(\alpha, \beta)$ of the AHSS iteration method, defined in (3.30), are
\[
\frac{\alpha - 1}{\alpha + 1} \quad \text{with multiplicity } n - m
\]
and
\[
\frac{1}{(\alpha + 1)(\alpha\beta + \sigma_k^2)}
\Bigl( \alpha(\alpha\beta - \sigma_k^2) \pm \sqrt{(\alpha\beta + \sigma_k^2)^2 - 4\alpha^3\beta\sigma_k^2} \Bigr),
\qquad k = 1, 2, \ldots, m.
\]

It should be mentioned that the singular values of $\hat{B}$ are exactly the square roots of the eigenvalues of the matrix $C^{-1} B^T A^{-1} B$.

Lemma 3.9. Assume that the conditions of Lemma 3.8 are satisfied. If $\sigma_k$ are the positive singular values of the matrix $\hat{B} \in \mathbb{R}^{n\times m}$ in (3.30), then the iteration matrix $T(\alpha, \beta)$ of the AHSS iteration method has

• $n - m$ eigenvalues $\lambda$ with absolute value $|\lambda| = \dfrac{|\alpha - 1|}{\alpha + 1}$,

• $2m$ eigenvalues $\lambda$ such that for $k = 1, 2, \ldots, m$

  – if $\alpha\beta + \sigma_k^2 > 2\alpha\sqrt{\alpha\beta}\,\sigma_k$, then there exist two corresponding eigenvalues $\lambda$ such that
\[
|\lambda| = \frac{\alpha}{1 + \alpha}
\left( \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
+ \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \right),
\]
\[
|\lambda| = \frac{\alpha}{1 + \alpha}
\left| \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
- \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \right|;
\]

  – if $\alpha\beta + \sigma_k^2 \le 2\alpha\sqrt{\alpha\beta}\,\sigma_k$ (which can only occur for $\alpha > 1$), then there are two eigenvalues $\lambda$ such that
\[
|\lambda| = \sqrt{\frac{\alpha - 1}{\alpha + 1}}.
\]

Using the lemmas above, it is possible to show that the AHSS iterative method converges; namely, the following theorem holds true:

Theorem 3.10. Consider the saddle-point problem (3.22) and assume that $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite, $B \in \mathbb{R}^{n\times m}$ has full column rank and $\alpha, \beta > 0$ are given. Let $C \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix. Then the spectral radius satisfies $\rho(T(\alpha, \beta)) < 1$ for all positive constants $\alpha$ and $\beta$, where
\[
\rho(T(\alpha, \beta)) =
\begin{cases}
\displaystyle \max\Bigl\{ \frac{1-\alpha}{1+\alpha},\;
\max_{k} \frac{\alpha}{1+\alpha}\Bigl( \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
+ \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \Bigr) \Bigr\}
& \text{for } \alpha \le 1, \\[2ex]
\displaystyle \max\Bigl\{ \sqrt{\frac{\alpha-1}{\alpha+1}},\;
\max_{k} \frac{\alpha}{1+\alpha}\Bigl( \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
+ \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \Bigr) \Bigr\}
& \text{for } \alpha > 1,
\end{cases}
\]
the inner maximum being taken over those $k$ for which the expression under the square root is non-negative. This means that the AHSS iteration converges to the exact solution of the saddle-point problem (3.22).

Proof [14]. First let us observe that
\[
\frac{|1-\alpha|}{1+\alpha} < 1 \quad \forall \alpha > 0
\qquad \text{and} \qquad
\sqrt{\frac{\alpha-1}{\alpha+1}} < 1 \quad \forall \alpha > 1.
\]
Now, for $k = 1, 2, \ldots, m$ with $\alpha\beta + \sigma_k^2 > 2\alpha\sqrt{\alpha\beta}\,\sigma_k$, we have
\[
\frac{\alpha}{1+\alpha}
\left| \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
\pm \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \right|
\le \frac{\alpha}{1+\alpha}
\left( \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2}
+ \sqrt{\frac{1}{\alpha^2} - \frac{4\alpha\beta\sigma_k^2}{(\alpha\beta + \sigma_k^2)^2}} \right)
< \frac{\alpha}{1+\alpha}
\left( \frac{|\alpha\beta - \sigma_k^2|}{\alpha\beta + \sigma_k^2} + \frac{1}{\alpha} \right)
\le \frac{\alpha}{1+\alpha} \left( 1 + \frac{1}{\alpha} \right) = 1.
\]
So, taking into account Lemma 3.9, we have shown that $\rho(T(\alpha, \beta)) < 1$ for any $\alpha > 0$ and $\beta > 0$. $\square$
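The unconditional convergence stated in Theorem 3.10 can also be checked numerically. The following sketch (our own illustration with made-up small random data, not part of the thesis) builds $T(\alpha, \beta)$ from (3.30) for a random symmetric positive definite $A$, a full-column-rank $B$ and $C = I$, and verifies that the spectral radius stays below one for several positive parameter pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)        # symmetric positive definite
B = rng.standard_normal((n, m))    # full column rank (generically)
C = np.eye(m)                      # one admissible SPD choice for C

def iteration_matrix(alpha, beta):
    # T(alpha, beta) is M'(alpha, beta)^{-1} times the right-hand-side
    # matrix of (3.28), cf. (3.30)
    Mp = np.block([[alpha * A, B], [-B.T, beta * C]])
    Np = np.block([[alpha * (alpha - 1.0) / (alpha + 1.0) * A,
                    -(alpha - 1.0) / (alpha + 1.0) * B],
                   [B.T, beta * C]])
    return np.linalg.solve(Mp, Np)

for alpha, beta in [(0.5, 0.5), (1.0, 2.0), (3.0, 0.1)]:
    rho = max(abs(np.linalg.eigvals(iteration_matrix(alpha, beta))))
    assert rho < 1.0               # unconditional convergence, Theorem 3.10
```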


The following theorem presents the results on the optimal iteration parameters and the corresponding asymptotic convergence factor of the AHSS iteration.

Theorem 3.11. Consider the saddle-point problem (3.22) and assume that $A \in \mathbb{R}^{n\times n}$ is symmetric positive definite, $B \in \mathbb{R}^{n\times m}$ has full column rank and $\alpha, \beta > 0$ are given. Let $C \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix. If $\sigma_k$ ($k = 1, 2, \ldots, m$) are the positive singular values of the matrix $W^T B Z \in \mathbb{R}^{n\times m}$, and $\sigma_{\min} = \min_{1\le k\le m} \sigma_k$ and $\sigma_{\max} = \max_{1\le k\le m} \sigma_k$, then, for the AHSS iteration method applied to the saddle-point problem (3.22), the optimal values of the iteration parameters $\alpha$ and $\beta$ are given by
\[
(\alpha^*, \beta^*) = \arg\min_{\alpha, \beta > 0} \rho(T(\alpha, \beta))
= \Bigl( \tau,\; \frac{\sigma_{\min}\sigma_{\max}}{\tau} \Bigr),
\]
and the corresponding $\rho(T(\alpha^*, \beta^*))$ is
\[
\rho(T(\alpha^*, \beta^*)) =
\frac{\sqrt{\sigma_{\max}} - \sqrt{\sigma_{\min}}}{\sqrt{\sigma_{\max}} + \sqrt{\sigma_{\min}}}
\equiv \frac{\sqrt[4]{\kappa} - 1}{\sqrt[4]{\kappa} + 1},
\]
where $\kappa = \dfrac{\sigma_{\max}^2}{\sigma_{\min}^2}$ is the condition number of the matrix $C^{-1} B^T A^{-1} B$ and
\[
\tau = \frac{\sigma_{\min} + \sigma_{\max}}{2\sqrt{\sigma_{\max}\sigma_{\min}}}
\equiv \frac{1}{2} \Bigl( \sqrt[4]{\kappa} + \frac{1}{\sqrt[4]{\kappa}} \Bigr).
\]

From Theorem 3.11 and Remark 4.3 in [30] it is known that the optimal convergence rate of the AHSS iteration method is $\frac{2}{\sqrt[4]{\kappa}}$, while that of the HSS iteration method is approximately $\frac{2}{\sqrt{\kappa}}$. This means that AHSS converges remarkably faster than HSS when the optimal iteration parameters are employed and $\kappa \gg 1$.
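The optimal parameters of Theorem 3.11 are cheap to evaluate once $\sigma_{\min}$ and $\sigma_{\max}$ are known. In the sketch below (our own illustration) the two extreme singular values are made up; the code only checks that the two equivalent expressions for $\rho(T(\alpha^*, \beta^*))$ and $\tau$ agree.

```python
import numpy as np

# sigma_min and sigma_max are made-up values for illustration
sigma_min, sigma_max = 0.2, 5.0
kappa = (sigma_max / sigma_min) ** 2        # condition number of C^{-1} B^T A^{-1} B

tau = (sigma_min + sigma_max) / (2.0 * np.sqrt(sigma_min * sigma_max))
alpha_opt = tau
beta_opt = sigma_min * sigma_max / tau
rho_opt = (np.sqrt(sigma_max) - np.sqrt(sigma_min)) / \
          (np.sqrt(sigma_max) + np.sqrt(sigma_min))

# The two expressions in Theorem 3.11 agree:
k4 = kappa ** 0.25
assert abs(rho_opt - (k4 - 1.0) / (k4 + 1.0)) < 1e-12
assert abs(tau - 0.5 * (k4 + 1.0 / k4)) < 1e-12
```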

As in the case of the HSS iteration method, the AHSS iteration can be used as a preconditioner to accelerate Krylov subspace methods such as GMRES. Namely, the matrix $M(\alpha, \beta)$ can be used as a preconditioner for the system (3.22).


Chapter 4

Finite Element Tearing and Interconnecting Method

In this chapter we discuss the Finite Element Tearing and Interconnecting (FETI) method. It is one of the domain decomposition methods [12] for solving large systems of linear equations arising from the finite element discretization of elliptic partial differential equations. We present the derivation of the method, which was first given by Farhat and Roux [10]. As in most domain decomposition methods, the original problem is divided into a number of subproblems which are easier to solve because of their smaller size. We also give the saddle-point formulation of the FETI method and present its main algebraic properties [26].

4.1 About Domain Decomposition Methods

The class of domain decomposition methods has gained enormous popularity during the last decade. These methods follow the idea of "divide and conquer": they divide the original problem into a number of smaller problems. This division is done for different reasons. Sometimes it arises naturally from breaking up a domain with complicated geometry; in other cases the division is more artificial. The subproblems are easier to solve because of their smaller size, and often parallel computation can be used, which is quite important for the efficiency of the computations.
Domain decomposition methods can be seen from two different points of view. One is that they may arise from the separation of a physical domain into regions. In these regions the problem can be modeled by separate partial differential equations. On the interfaces between the subdomains various conditions, such as continuity, are imposed. The other approach is to view domain decomposition methods as methods for solving the large algebraic linear systems arising from the discretization of partial differential equations. In that sense, a domain decomposition method can be seen as an algebraic method, where the large system is subdivided into smaller problems whose solutions can be used to generate a preconditioner for the large system.

4.2 Original FETI method

As we have mentioned before, the Finite Element Tearing and Interconnecting (FETI) method is a domain decomposition method designed for solving systems arising from the finite element discretization of elliptic partial differential equations. In general terms, the method can be described as follows: a given domain is "torn" into non-overlapping subdomains, where an incomplete solution of the primary field is first evaluated using a direct solver. Intersubdomain field continuity is then enforced using Lagrange multipliers. As a result of this "gluing" process, a smaller symmetric dual problem is generated, in which the unknowns are the Lagrange multipliers, and which is solved by a preconditioned conjugate gradient (PCG) method.

To describe the method in more detail, let us first introduce some notations and assumptions needed in this chapter. For $u, v \in \mathbb{R}^n$, the inner product $\langle u, v \rangle = u^T v$ is also interpreted as a duality product. For a symmetric positive semidefinite matrix $A$, we denote $\|u\|_A = \langle Au, u \rangle^{1/2}$, which is the seminorm induced by the matrix $A$. Naturally, if $A$ is positive definite, then $\|u\|_A$ is a norm. We will also need the notion of the pseudoinverse of an operator, which is defined as follows:

Definition 4.1. Let $A$ be a linear operator. A pseudoinverse $A^+$ is any linear operator such that $AA^+a = a$ for all $a \in \operatorname{Im} A$.

In general, a pseudoinverse is not unique. The algorithms discussed in this chapter are invariant with respect to the specific choice of the pseudoinverse. If $A$ is a symmetric operator on a finite-dimensional space, then the pseudoinverse $A^+$ can also be chosen symmetric. Indeed, if we consider the spectral decomposition of $A$,
\[
A = \sum_{\sigma} \sigma v_\sigma v_\sigma^T, \qquad A v_\sigma = \sigma v_\sigma, \qquad v_\sigma^T v_\sigma = 1, \qquad (4.1)
\]

then as a pseudoinverse $A^+$ one can choose
\[
A^+ = \sum_{\sigma \neq 0} \frac{1}{\sigma} v_\sigma v_\sigma^T.
\]
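For a symmetric matrix, the construction above is immediate to implement. The sketch below (our own illustration, not part of the thesis) builds the symmetric pseudoinverse from the spectral decomposition and checks the defining property on a singular matrix whose kernel consists of the constant vectors.

```python
import numpy as np

def sym_pinv(A, tol=1e-12):
    """Symmetric pseudoinverse of a symmetric matrix via its spectral decomposition."""
    sigma, V = np.linalg.eigh(A)       # A = V diag(sigma) V^T
    inv = np.zeros_like(sigma)
    mask = np.abs(sigma) > tol
    inv[mask] = 1.0 / sigma[mask]      # invert only the nonzero eigenvalues
    return V @ np.diag(inv) @ V.T

# A singular symmetric example: a 1D Neumann stiffness matrix (kernel = constants)
K = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
Kp = sym_pinv(K)
a = K @ np.array([1.0, 2.0, 4.0])      # some vector a in Im K
assert np.allclose(K @ (Kp @ a), a)    # defining property: A A^+ a = a for a in Im A
assert np.allclose(Kp, Kp.T)           # this pseudoinverse is symmetric
```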

Now, let the domain $\Omega$ in $\mathbb{R}^2$ ($\mathbb{R}^3$) be decomposed into $N_s$ non-overlapping subdomains $\Omega_1, \Omega_2, \ldots, \Omega_{N_s}$. Let $u_i$ be the vector of degrees of freedom for the subdomain $\Omega_i$, corresponding to a conforming finite element discretization of an elliptic problem (for instance, linear elasticity or the Stokes problem) defined on $\Omega$, such that each subdomain is a union of some of the elements. Assume $K_i$ and $f_i$ are the local stiffness matrices and the load vectors, respectively, associated with the subdomain $\Omega_i$. Then we consider $u$, $f$ and $K$ defined as
\[
u = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_{N_s} \end{bmatrix}, \qquad
f = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_{N_s} \end{bmatrix}, \qquad
K = \begin{bmatrix} K_1 & O & \cdots & O \\ O & K_2 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & K_{N_s} \end{bmatrix}. \qquad (4.2)
\]

Depending on the boundary conditions and the location of the subdomain, the local stiffness matrix $K_i$ is positive definite or positive semidefinite. A subdomain without sufficient essential boundary conditions (which would be needed to prevent the local stiffness matrix $K_i$ from being singular) is called a floating subdomain. Now, denote by $Z_i$ the matrix whose linearly independent columns generate the kernel of $K_i$, which means that $\operatorname{Im} Z_i = \ker K_i$. If we consider
\[
Z = \begin{bmatrix} Z_1 & O & \cdots & O \\ O & Z_2 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & Z_{N_s} \end{bmatrix},
\]
then
\[
\operatorname{Im} Z = \ker K \quad \text{and} \quad \ker Z = \{0\}.
\]

If a mesh point $x \in \Omega$ lies on the intersection of subdomain boundaries, which is called the interface, the algorithm assigns several degrees of freedom to it.

Page 45: Hermitian and skew-Hermitian Solvers and … · Abstract The aim of the master thesis is to apply the Hermitian and skew-Hermitian (HSS) iterative method and its inexact version to

CHAPTER 4. FINITE ELEMENT TEARING AND INTERCONNECTING METHOD44

Now, let the matrix $B$ be such that the constraint $Bu = 0$ expresses the condition that, for each mesh node shared by more than one subdomain, the values of the degrees of freedom associated with that node coincide. Consider the space of all vectors of degrees of freedom, which we denote by $W$, and the space of the vectors of values of the continuity constraint, denoted by $\Lambda$. With these notations we can see that

K : W → W and B : W → Λ.

In fact, the problem that we need to solve is the following minimization problem subject to the intersubdomain continuity conditions:
\[
E(u) = \frac{1}{2} u^T K u - f^T u \to \min \quad \text{subject to} \quad Bu = 0, \; u \in W. \qquad (4.3)
\]

If we assume that
\[
\ker B \cap \ker K = \{0\}, \qquad (4.4)
\]
then the solution of (4.3) is unique. For describing the FETI algorithm we need some more notations, which are given in (4.5):

\[
\begin{aligned}
G &= BZ,\\
F &= BK^+B^T,\\
d &= BK^+f,\\
e &= Z^T f,\\
P &= I - G(G^T G)^{-1} G^T.
\end{aligned} \qquad (4.5)
\]

Later in this chapter we will show that $P$ is well defined, namely that the matrix $G^T G$ is invertible.
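That $P = I - G(G^T G)^{-1} G^T$ is the orthogonal projection onto $\ker G^T$ can be checked directly. In the sketch below (our own illustration) a random full-column-rank matrix stands in for $G = BZ$; the sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((8, 3))                      # full column rank (generically)
P = np.eye(8) - G @ np.linalg.solve(G.T @ G, G.T)    # P = I - G (G^T G)^{-1} G^T

assert np.allclose(P @ P, P)        # idempotent: a projection
assert np.allclose(P, P.T)          # symmetric: an orthogonal projection
assert np.allclose(G.T @ P, 0.0)    # the range of P is orthogonal to the range of G
```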

As originally done by Farhat and Roux [10], Lagrange multipliers are introduced to enforce the continuity of the solution. With this, solving the constrained minimization problem (4.3) leads to the following system of equations:
\[
\begin{aligned}
Ku + B^T\lambda &= f,\\
Bu &= 0.
\end{aligned} \qquad (4.6)
\]

Now, let us note that a solution $u$ of the first equation in (4.6) exists if and only if
\[
f - B^T\lambda \in \operatorname{Im} K. \qquad (4.7)
\]

Page 46: Hermitian and skew-Hermitian Solvers and … · Abstract The aim of the master thesis is to apply the Hermitian and skew-Hermitian (HSS) iterative method and its inexact version to

CHAPTER 4. FINITE ELEMENT TEARING AND INTERCONNECTING METHOD45

Then $u$ must have the form
\[
u = K^+(f - B^T\lambda) + Z\alpha, \qquad (4.8)
\]

where $\alpha$ still has to be specified. If we substitute the expression for $u$ from (4.8) into the second equation in (4.6), we get
\[
BK^+(f - B^T\lambda) + BZ\alpha = 0. \qquad (4.9)
\]

Multiplying (4.9) by $P$, which was defined in (4.5), and taking into account (4.7), we obtain that $\lambda$ satisfies the following system of equations:
\[
\begin{aligned}
P(F\lambda - d) &= 0,\\
G^T\lambda &= e,
\end{aligned} \qquad (4.10)
\]
with $e$, $F$, $G$ defined in (4.5). Let us now show that the orthogonal projection $P$ is well defined.

Lemma 4.2. The matrix $G^T G$ is invertible, i.e. $(G^T G)^{-1}$ exists.

Proof [26]. Let $Gw = BZw = 0$. Then $Zw \in \ker B$. From the definition of $Z$ it follows that $Zw \in \ker K$. As we have assumed that $\ker K \cap \ker B = \{0\}$, we get $Zw = 0$. Since we also assumed that $Z$ has full column rank, we immediately obtain $w = 0$. This means that $G$ has full column rank, and therefore the matrix $G^T G$ is symmetric positive definite and hence invertible. $\square$

Next let us consider the system (4.10).

Theorem 4.3. The solution $\lambda$ of (4.10) is unique up to the addition of a vector from $\ker B^T$. Any solution $\lambda$ of (4.10) yields the same solution $u$ of the minimization problem (4.3) via (4.8) with $\alpha = -(G^T G)^{-1} G^T (d - F\lambda)$.

Proof. See [26]. $\square$

The original FETI algorithm, which is an application of the preconditioned CG method for solving the equation $P(F\lambda - d) = 0$ from (4.10), using a symmetric preconditioner $D$, can be written as follows.

Algorithm 4. (FETI)
Given an initial $\lambda^0$, compute the initial estimate
\[
\lambda_0 = G(G^T G)^{-1} e + P\lambda^0
\]
and the initial residual
\[
r_0 = P(F\lambda_0 - d).
\]
Repeat for $k = 1, 2, \ldots$ until convergence:
\[
\begin{aligned}
z_{k-1} &= D r_{k-1},\\
y_{k-1} &= P z_{k-1},\\
\xi_k &= r_{k-1}^T y_{k-1},\\
p_k &= y_{k-1} + \frac{\xi_k}{\xi_{k-1}}\, p_{k-1} \qquad (p_1 = y_0),\\
\mu_k &= \frac{\xi_k}{p_k^T P F p_k},\\
\lambda_k &= \lambda_{k-1} + \mu_k p_k,\\
r_k &= r_{k-1} + \mu_k P F p_k.
\end{aligned}
\]
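The projected PCG loop above can be sketched as follows. This is our own illustration on a synthetic dense problem, not a reproduction of a FETI production code: the preconditioner $D$ defaults to the identity, and the residual is taken as $r = P(d - F\lambda)$ so that the updates below are descent steps for the constrained quadratic.

```python
import numpy as np

def feti_pcg(F, G, d, e, D=None, tol=1e-10, max_it=200):
    """Sketch of the projected PCG loop of Algorithm 4 for
    P(F lam - d) = 0, G^T lam = e, with P = I - G (G^T G)^{-1} G^T."""
    m = F.shape[0]
    D = np.eye(m) if D is None else D
    GtG = G.T @ G
    P = np.eye(m) - G @ np.linalg.solve(GtG, G.T)
    lam = G @ np.linalg.solve(GtG, e)          # feasible start: G^T lam = e
    r = P @ (d - F @ lam)
    p, xi_old = None, None
    for _ in range(max_it):
        y = P @ (D @ r)                        # precondition, then project
        xi = r @ y
        if np.sqrt(abs(xi)) < tol:
            break
        p = y if p is None else y + (xi / xi_old) * p
        mu = xi / (p @ (F @ p))                # p = P p, hence p^T P F p = p^T F p
        lam = lam + mu * p
        r = r - mu * (P @ (F @ p))
        xi_old = xi
    return lam
```

Since every search direction lies in the range of $P$, the constraint $G^T\lambda = e$ is preserved throughout the iteration, while $\|P(F\lambda - d)\|$ is driven to zero.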

So, we see that in the original FETI algorithm the first equation of (4.10) is solved by a preconditioned conjugate gradient method, using an initial approximation $\lambda_0$ that satisfies the second equation. The conjugate gradient method requires the evaluation of the action of $PF$. Since $F = BK^+B^T$, most of the computational work is concentrated in the evaluation of $K^+$. On the other hand, $K^+$ is a block diagonal matrix, so its action can be computed in parallel, involving the solution of subdomain problems only. The application of $P$ leads to solving a small coarse problem. For a scalar problem the size of the coarse problem corresponding to $P$ is less than the number of subdomains $N_s$.

Another approach is to solve the system (4.6) using the HSS iterative method. For this, let us rewrite the system (4.6) as a saddle-point system:
\[
\begin{bmatrix} K & B^T \\ B & O \end{bmatrix}
\begin{bmatrix} u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f \\ 0 \end{bmatrix}. \qquad (4.11)
\]

In fact, as has been mentioned several times, this is the main interest of our work. So, in the next chapter we will apply the HSS method to the FETI system (4.11) for our model problem.


Chapter 5

HSS applied to FETI system

In the previous two chapters we introduced the HSS iterative method as a potential solver for saddle-point problems, and we also looked at the saddle-point formulation of the well-known domain decomposition method FETI. In this chapter we first apply the FETI domain decomposition method to our model problem and then solve the resulting saddle-point system using the HSS iterative method. We use the HSS method both as a stationary iterative method and as a preconditioner for the Krylov subspace method GMRES, and the potential of this approach is illustrated.

5.1 Numerical results

Let us recall that our model problem is (2.1):
\[
\begin{aligned}
-\Delta u(x, y) &= f(x, y) \quad \text{in } \Omega = (0, 2) \times (0, 1),\\
u(x, y) &= g(x, y) \quad \text{on } \partial\Omega.
\end{aligned} \qquad (5.1)
\]

Let us apply the FETI method to our model problem. For this, consider the simplest case, when the domain $\Omega$ is divided into two subdomains $\Omega_1$ and $\Omega_2$ such that $\Omega_1 \cap \Omega_2 = \emptyset$ and $\partial\Omega_1 \cap \partial\Omega_2 = \Gamma$. As described in Chapter 4, for constructing the FETI system we need to compute the stiffness matrices $K_i$ and the load vectors $f_i$ for each subdomain $\Omega_i$, $i \in \{1, 2\}$. In order to enforce the continuity condition we introduce the Lagrange multiplier $\lambda$. Then the system to be solved


Figure 5.1: Tearing of unknowns on the interface


is:
\[
\begin{bmatrix} K & B^T \\ B & O \end{bmatrix}
\begin{bmatrix} u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f \\ 0 \end{bmatrix}, \qquad (5.2)
\]
or
\[
\mathcal{A}x = b, \qquad (5.3)
\]
with
\[
K = \begin{bmatrix} K_1 & O \\ O & K_2 \end{bmatrix}, \qquad
f = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}, \qquad
u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix},
\]
\[
\mathcal{A} = \begin{bmatrix} K & B^T \\ B & O \end{bmatrix}, \qquad
x = \begin{bmatrix} u \\ \lambda \end{bmatrix}, \qquad
b = \begin{bmatrix} f \\ 0 \end{bmatrix}.
\]

Recall that the matrix $B$ (with entries from $\{0, 1, -1\}$) is such that the constraint $Bu = 0$ enforces the continuity of the solution $u$ across the interface $\Gamma$. For the case of two subdomains one can show that, with a certain numbering of the nodal points, $B$ is such that $BB^T$ is a diagonal matrix; more precisely,
\[
BB^T = 2I.
\]
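The structure $BB^T = 2I$ is easy to see in a small sketch: one signed Boolean row per interface node, with a $+1$ for the copy of the degree of freedom in $\Omega_1$ and a $-1$ for the matching copy in $\Omega_2$. The sizes and index sets below are made up for illustration.

```python
import numpy as np

n1, n2, m = 5, 5, 3                  # subdomain sizes and number of interface dofs
iface1 = [2, 3, 4]                   # interface dofs in subdomain 1 (illustrative)
iface2 = [0, 1, 2]                   # matching interface dofs in subdomain 2
B = np.zeros((m, n1 + n2))
for k in range(m):
    B[k, iface1[k]] = 1.0            # u1 copy of the shared dof
    B[k, n1 + iface2[k]] = -1.0      # minus the u2 copy: row enforces u1_i - u2_i = 0

# One constraint per node, disjoint supports: the rows are orthogonal
assert np.allclose(B @ B.T, 2.0 * np.eye(m))
```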

If we recall the HSS algorithm for saddle-point problems, such a structure of the matrix $BB^T$ is quite advantageous, as the two subsystems of the HSS iteration can then be solved quite accurately. In our numerical experiments we used as the Dirichlet data and the right-hand side the functions
\[
g(x, y) = x + y + 1 \quad \text{and} \quad f(x, y) = 0.
\]

It can easily be seen that the exact solution of the model problem (2.1) is the harmonic function $u(x, y) = 1 + x + y$. We applied the HSS iterative scheme (3.17) to our model problem, and we also used the HSS iterative method as a preconditioner for GMRES. In Figure 5.2 we display the spectral radius of the iteration matrix $M_\alpha$ in the case $h = 1/9$ for different values of $\alpha$. If we denote by $\alpha_{\mathrm{opt}}$ the value of $\alpha$ that minimizes the spectral radius, then, as we can see from the figure, $\alpha_{\mathrm{opt}} = 1.20$ and $\rho(M_{\alpha_{\mathrm{opt}}}) = 0.77$.

Now, we have compared the iteration counts of HSS as an iterative solver, of GMRES without preconditioner, and of GMRES with the HSS iterative scheme as a preconditioner. The results of the comparison are summarized in Table 5.1. In all our


Figure 5.2: Spectral radius of the iteration matrix $\rho(M_\alpha)$ for different values of $\alpha$ ($h = 1/9$)

Figure 5.3: The spectrum of the iteration matrix for the optimal value of $\alpha$


Table 5.1: Comparison of the HSS iterative scheme, GMRES without preconditioning, and GMRES with the HSS iterative scheme as a preconditioner

h     | Iterative | GMRES | Preconditioned GMRES
1     | 7         | 1     | 1 (inner 3)
1/2   | 13        | 7     | 4 (inner 3)
1/4   | 13        | 18    | 5 (inner 3)
1/8   | 34        | 30    | 7 (inner 3)
1/16  | 98        | 64    | 10 (inner 3)

runs we used a zero initial guess, and the iteration was stopped when the relative residual had been reduced by at least four orders of magnitude, i.e. when $\|b - \mathcal{A}x\| \le 10^{-4}\|b\|$. For the inner HSS iteration of the preconditioned GMRES we used the accuracy $\varepsilon = 10^{-1}$. As can be seen from the table, the HSS iterative scheme as a preconditioner for GMRES remarkably reduces the number of iterations. For instance, for $h = 1/17$ the preconditioning halves the number of iterations.
We have also tried changing the accuracy of the inner iteration for preconditioned GMRES. In Table 5.2 we present the results.

Table 5.2: Number of iterations of preconditioned GMRES for different accuracies of the inner HSS iteration

h     | ε = 10^{-1}   | ε = 10^{-2}   | ε = 10^{-3}
1     | 1 (inner 3)   | 1 (inner 4)   | 1 (inner 6)
1/2   | 4 (inner 3)   | 3 (inner 6)   | 3 (inner 9)
1/4   | 5 (inner 3)   | 3 (inner 6)   | 3 (inner 9)
1/8   | 7 (inner 3)   | 3 (inner 12)  | 3 (inner 23)
1/16  | 10 (inner 3)  | 5 (inner 21)  | 3 (inner 56)

We observe that, by making the inner iteration more accurate, the outer iteration becomes independent of $h$. Nevertheless, the inner accuracy $\varepsilon = 10^{-1}$ clearly leads to the smallest total number of iterations.


Chapter 6

Other saddle-point problems

Saddle-point systems arise in many scientific and engineering applications, including computational fluid dynamics [15], [16], [24], [17], mixed finite element approximations of elliptic PDEs [18], [19], [25], and optimization [20], [23], [22], [21]. In this chapter we present some more examples of problems leading to saddle-point systems. We also consider the Boundary Element Method, which, as we will see, gives rise to saddle-point problems as well. We suggest the HSS iterative method as a potential method for solving these problems.

6.1 Mixed Formulations of 2nd order elliptic problems

6.1.1 Linear elliptic problems

Let us consider the following problem:
\[
\begin{aligned}
\operatorname{div}(A(x)\nabla u) &= f \quad \text{in } \Omega \subset \mathbb{R}^d,\\
u &= g_D \quad \text{on } \Gamma_D,\\
(A(x)\nabla u) \cdot n &= g_N \quad \text{on } \Gamma_N,
\end{aligned} \qquad (6.1)
\]

where

• ΓD ∩ ΓN = ∅ and ΓD ∪ ΓN = ∂Ω,


• A(x) is a smooth function on Ω and A(x) ≥ a > 0 for all x ∈ Ω,

• n is the outward normal to ∂Ω,

• f, gD and gN are given smooth functions on Ω, ΓD and ΓN, respectively.

Consider the manifold $V_g \subset H^1(\Omega)$ and the space $V_0 \subset H^1(\Omega)$ defined as
\[
V_g = \{ v \in H^1(\Omega) : v = g_D \text{ on } \Gamma_D \},
\qquad
V_0 = \{ v \in H^1(\Omega) : v = 0 \text{ on } \Gamma_D \}.
\]
If we now multiply the first equation in (6.1) by a function $v \in V_0$ and integrate over $\Omega$, we get
\[
\int_\Omega \operatorname{div}(A(x)\nabla u)\, v \, dx = \int_\Omega f v \, dx \qquad \forall v \in V_0.
\]

Using Green's formula and taking into account the third equation in (6.1), we get
\[
\int_\Omega A(x)\nabla u \cdot \nabla v \, dx = -\int_\Omega f v \, dx + \int_{\Gamma_N} g_N v \, ds_x \qquad \forall v \in V_0.
\]

Note that for the formulation above we can relax the conditions on $A(x)$, requiring only $A(x) \in L^\infty(\Omega)$ and $\bar{a} \ge A(x) \ge \underline{a} > 0$ for almost all $x \in \Omega$. So the primal variational problem can be formulated as:

Find $u \in V_g$ such that
\[
\int_\Omega A(x)\nabla u \cdot \nabla v \, dx = -\int_\Omega f v \, dx + \int_{\Gamma_N} g_N v \, ds_x \qquad \forall v \in V_0. \qquad (6.2)
\]

In order to obtain the mixed formulation of (6.1) we introduce the variable
\[
p = A\nabla u \quad \text{in } \Omega. \qquad (6.3)
\]
Then the first and third equations of (6.1) become, respectively,
\[
\operatorname{div} p = f \quad \text{in } \Omega, \qquad (6.4)
\]
and
\[
p \cdot n = g_N \quad \text{on } \Gamma_N. \qquad (6.5)
\]

It is now possible to give two reasonable variational formulations for (6.2) - (6.5).


• First variational formulation:

Find $u \in V_g$ and $p \in (L^2(\Omega))^d$ such that
\[
\begin{aligned}
\int_\Omega (A(x))^{-1} p \cdot q \, dx - \int_\Omega q \cdot \nabla u \, dx &= 0 \qquad \forall q \in (L^2(\Omega))^d,\\
-\int_\Omega p \cdot \nabla v \, dx &= \int_\Omega f v \, dx - \int_{\Gamma_N} g_N v \, ds_x \qquad \forall v \in V_0.
\end{aligned} \qquad (6.6)
\]

For introducing the second variational formulation we consider the space $H_0(\operatorname{div}; \Omega)$ and the manifold $H_{g_N}(\operatorname{div}; \Omega)$ defined by
\[
H_0(\operatorname{div}; \Omega) = \{ q \in (L^2(\Omega))^d : \operatorname{div} q \in L^2(\Omega), \; q \cdot n = 0 \text{ on } \Gamma_N \}
\]
and
\[
H_{g_N}(\operatorname{div}; \Omega) = \{ q \in (L^2(\Omega))^d : \operatorname{div} q \in L^2(\Omega), \; q \cdot n = g_N \text{ on } \Gamma_N \}.
\]

Now the second variational formulation can be stated.

• Second variational formulation:

Find $u \in L^2(\Omega)$ and $p \in H_{g_N}(\operatorname{div}; \Omega)$ such that
\[
\begin{aligned}
\int_\Omega (A(x))^{-1} p \cdot q \, dx + \int_\Omega u \operatorname{div} q \, dx &= \int_{\Gamma_D} g_D \, q \cdot n \, ds_x \qquad \forall q \in H_0(\operatorname{div}; \Omega),\\
\int_\Omega v \operatorname{div} p \, dx &= \int_\Omega f v \, dx \qquad \forall v \in L^2(\Omega).
\end{aligned} \qquad (6.7)
\]

The difference between the two variational formulations lies simply in the use of Green's formula (integration by parts); as a consequence, the regularity required for $u$ and $p$ is interchanged. For discretizing the first formulation one needs continuous finite elements for $u$ and can use discontinuous finite elements for $p$. For discretizing the second formulation, on the other hand, one can use discontinuous finite elements for $u$, but the finite elements for $p$ have to be such that $\operatorname{div} p \in L^2(\Omega)$. Another difference between the two formulations is the treatment of essential and natural boundary conditions. In


general the primal formulation is simpler, as it involves only one variable, and there are many robust methods based on this approximation. But in many applications the second variable $p$ is the more relevant physical variable. In these cases the mixed formulation is preferred, since very often it provides better accuracy for $p$. If we now consider the second formulation and introduce the notations
\[
a(p, q) = \int_\Omega (A(x))^{-1} p \cdot q \, dx, \qquad b(v, p) = \int_\Omega v \operatorname{div} p \, dx,
\]
\[
\langle F, v \rangle = \int_\Omega f v \, dx, \qquad \langle G, q \rangle = \int_{\Gamma_D} g_D \, q \cdot n \, ds_x,
\]
then the variational formulation can be restated in the following way:

Find $u \in L^2(\Omega)$ and $p \in H_{g_N}(\operatorname{div}; \Omega)$ such that
\[
\begin{aligned}
a(p, q) + b(u, q) &= \langle G, q \rangle \qquad \forall q \in H_0(\operatorname{div}; \Omega),\\
b(v, p) &= \langle F, v \rangle \qquad \forall v \in L^2(\Omega).
\end{aligned} \qquad (6.8)
\]
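A minimal 1D instance of (6.8) makes the saddle-point structure concrete. The sketch below is our own illustration, not from the thesis: it assumes $\Omega = (0, 1)$, $A(x) \equiv 1$, $\Gamma_N = \emptyset$, $g_D = 0$ and $f = 1$, discretizes $p$ by continuous piecewise linear elements and $u$ by piecewise constants (a 1D analogue of the lowest-order $H(\operatorname{div})$-conforming pair), and solves the resulting block system. For this data the exact flux $p(x) = u'(x) = x - 1/2$ is reproduced.

```python
import numpy as np

n = 32
h = 1.0 / n
# Mass matrix for p in continuous P1 (n + 1 nodes): a(p, q) = \int_0^1 p q dx
M = np.zeros((n + 1, n + 1))
for i in range(n):
    M[i:i + 2, i:i + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
# b(v, q) = \int v q' dx with v piecewise constant: per cell, q_{i+1} - q_i
D = np.zeros((n, n + 1))
for i in range(n):
    D[i, i], D[i, i + 1] = -1.0, 1.0

# Saddle-point system for div p = f, p = u', u(0) = u(1) = 0 (g_D = 0), f = 1:
#   [ M  D^T ] [p]   [0]
#   [ D   O  ] [u] = [F],   F_i = \int_{cell} f dx = h
rhs = np.concatenate([np.zeros(n + 1), h * np.ones(n)])
Asys = np.block([[M, D.T], [D, np.zeros((n, n))]])
sol = np.linalg.solve(Asys, rhs)
p, u = sol[:n + 1], sol[n + 1:]

x = np.linspace(0.0, 1.0, n + 1)
assert np.allclose(p, x - 0.5)     # the exact flux u'(x) = x - 1/2 is reproduced
```

Note how the assembled system inherits exactly the two-by-two block form studied in Chapter 3, so the HSS and AHSS solvers apply directly.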

6.1.2 Linear elasticity problem

For a vector-valued function $v(x)$ we define by $\varepsilon(v) = [\varepsilon_{ij}]$ the following second-order tensor:
\[
\varepsilon_{ij} = \frac{1}{2}\Bigl( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \Bigr), \qquad i, j = 1, \ldots, d. \qquad (6.9)
\]

The linear elasticity equations are given by
\[
\begin{aligned}
\sigma &= E : \varepsilon(u) \quad \text{in } \Omega
\qquad \Bigl( \sigma_{ij} = \sum_{l=1}^{d}\sum_{m=1}^{d} E_{ijlm}\,\varepsilon_{lm}(u) \Bigr),\\
\operatorname{div} \sigma &= f \quad \text{in } \Omega.
\end{aligned} \qquad (6.10)
\]

If we substitute $\sigma = E : \varepsilon(u)$ and $\varepsilon_{ij} = \frac{1}{2}\bigl( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \bigr)$ into the second equation of (6.10), we get a second-order elliptic system with the unknown $u$. In (6.10) the fourth-order tensor $E$ is called the elasticity tensor, which we assume to be elliptic and to have constant coefficients. We denote the inverse tensor of $E$ by $C$. This means
\[
\tau = E : \varepsilon(v) \iff \varepsilon(v) = C : \tau.
\]


Again, for simplicity, we assume a homogeneous Dirichlet boundary condition:
\[
u = 0 \quad \text{on } \partial\Omega.
\]
Consider
\[
H_s(\operatorname{div}; \Omega) = \{ \tau \in (L^2(\Omega))^{d^2} : \tau_{ij} = \tau_{ji} \; \forall i, j, \; \operatorname{div}\tau \in (L^2(\Omega))^d \}.
\]

Now we can state the mixed variational formulation.

Find $u \in (L^2(\Omega))^d$ and $\sigma \in H_s(\operatorname{div}; \Omega)$ such that
\[
\begin{aligned}
\int_\Omega (C : \sigma) : \tau \, dx + \int_\Omega u \cdot \operatorname{div}\tau \, dx &= 0 \qquad \forall \tau \in H_s(\operatorname{div}; \Omega),\\
\int_\Omega v \cdot \operatorname{div}\sigma \, dx &= \int_\Omega f \cdot v \, dx \qquad \forall v \in (L^2(\Omega))^d.
\end{aligned} \qquad (6.11)
\]

By introducing the notations
\[
a(\sigma, \tau) = \int_\Omega (C : \sigma) : \tau \, dx, \qquad
b(u, \tau) = \int_\Omega u \cdot \operatorname{div}\tau \, dx, \qquad
\langle F, v \rangle = \int_\Omega f \cdot v \, dx,
\]
the mixed variational formulation can be rewritten as
\[
\begin{aligned}
a(\sigma, \tau) + b(u, \tau) &= 0 \qquad \forall \tau \in H_s(\operatorname{div}; \Omega),\\
b(v, \sigma) &= \langle F, v \rangle \qquad \forall v \in (L^2(\Omega))^d.
\end{aligned} \qquad (6.12)
\]

These mixed formulations are due to Hellinger and Reissner, and the procedure of deriving the mixed formulation is sometimes called the Hellinger-Reissner principle.

6.1.3 Stokes problem

Next example we will consider is the Stokes equations for modeling the flow ofincompressible fluids. The Stokes equations are:

−ν∆u + ∇p = f in Ω
div u = 0 in Ω
(6.13)


Many kinds of boundary conditions can be used with these equations, but for simplicity we will consider only the homogeneous Dirichlet boundary condition, the so-called no-slip condition:

u = 0 on ∂Ω

As usual, in order to get the variational formulation we multiply the governing equations by test functions and integrate over the domain Ω. Doing this and using integration by parts, we obtain the following variational formulation.

Find u ∈ (H^1_0(Ω))^d and p ∈ L^2(Ω)/R such that:

∫_Ω ν∇u : ∇v dx − ∫_Ω p div v dx = ∫_Ω f · v dx   ∀v ∈ (H^1_0(Ω))^d
∫_Ω q div u dx = 0   ∀q ∈ L^2(Ω)/R
(6.14)

where A : B denotes the Frobenius product of two matrices, A : B = Σ_{i,j} A_ij B_ij (so that ∇u : ∇v is a scalar). If we now introduce the following notations:

• a(u, v) = ∫_Ω ν∇u : ∇v dx and b(q, v) = ∫_Ω q div v dx,

• 〈F, v〉 = ∫_Ω f · v dx,

then the variational formulation can be rewritten as:

Find u ∈ (H^1_0(Ω))^d and p ∈ L^2(Ω)/R such that:

a(u, v) − b(p, v) = 〈F, v〉   ∀v ∈ (H^1_0(Ω))^d
b(q, u) = 0   ∀q ∈ L^2(Ω)/R
(6.15)

6.2 Boundary Element Method

The Boundary Element Method (BEM) is a numerical method for solving linear partial differential equations which have been formulated as integral equations. BEM uses the given boundary conditions to fit boundary values into the integral equation, rather than values throughout the space defined by a


PDE. Once this is done, the integral representation can be used again to calculate the solution numerically at any desired point in the interior of the solution domain. In terms of computational resources, BEM is often more efficient than other methods, for instance finite elements. Conceptually, it works by constructing a "mesh" over the surface of the solution domain. However, for many problems BEM is significantly less efficient than volume-discretisation methods (FEM, FDM, FVM). Boundary element formulations typically result in fully populated matrices, so the storage requirements and computational time grow with the square of the number of unknowns. By contrast, finite element matrices are sparse (since the elements are only locally connected) and the storage requirements for the system matrices typically grow linearly with the problem size. A further restriction of BEM is that it can only be applied to problems for which the fundamental solution can be calculated. But once the fundamental solution is known and the boundary values are approximated, the solution in the interior of the domain can be calculated very accurately, thanks to the representation formula. To introduce the technique of boundary element methods we will need some definitions.

Consider Ω ⊂ R^d. Define C^∞_0(Ω) as:

C^∞_0(Ω) := {φ ∈ C^∞(R^d) : φ has compact support in Ω}.

We say that a sequence φ_n ∈ C^∞_0(Ω) converges to 0 if there exists a compact subset K ⊂ Ω such that

• φ_n(x) = 0 ∀x ∈ Ω \ K, ∀n ∈ N,

• ∂^α φ_n ⇒ 0 (converges uniformly to 0) on K for every multi-index α.

We denote by D the space C^∞_0(Ω) equipped with the topology described above. The space of continuous linear functionals on D is the dual space D′, which is called the distribution space. Next, we need the concept of the Dirac delta distribution, which is defined as the functional δ ∈ D′ such that

〈δ(x), φ(x)〉_{D′×D} = φ(0)   ∀φ ∈ D.

Definition 6.1. The fundamental solution of a scalar elliptic partial differential operator L_x is denoted by E(x, y) and defined by the following relation:

L_x E(x, y) = δ(x − y) in D′(Ω), Ω ⊂ R^d,

or

〈L_x E(x, y), φ(x)〉_{D′×D} = 〈δ(x − y), φ(x)〉 := φ(y) for every φ ∈ D(Ω),

where y is a parameter.


For instance, the fundamental solution of the Laplace operator in d-dimensional space (d = 1, 2, 3) is given by [5]:

E(x, y) = (1/2)(1 − |x − y|)        for d = 1,
E(x, y) = −(1/(2π)) log |x − y|     for d = 2,
E(x, y) = (1/(4π)) · 1/|x − y|      for d = 3.
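As a quick sanity check (our own sketch, not taken from [5]), the kernels for d = 2 and d = 3 are harmonic away from the singularity x = y, which a second-order finite-difference Laplacian confirms numerically at an arbitrary point x ≠ y:

```python
import numpy as np

def E(x, y, d):
    """Fundamental solution of the Laplace operator in d dimensions, cf. [5]."""
    r = np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(y))
    if d == 1:
        return 0.5 * (1.0 - r)
    if d == 2:
        return -np.log(r) / (2.0 * np.pi)
    if d == 3:
        return 1.0 / (4.0 * np.pi * r)
    raise ValueError("d must be 1, 2 or 3")

def fd_laplacian(f, x, h=1e-3):
    """Second-order central finite-difference Laplacian of f at the point x."""
    lap = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        lap += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return lap

# Away from the singularity x = y the kernels are harmonic: Delta_x E = 0.
lap2 = fd_laplacian(lambda z: E(z, np.zeros(2), 2), np.array([0.6, -0.3]))
lap3 = fd_laplacian(lambda z: E(z, np.zeros(3), 3), np.array([0.7, -0.4, 0.5]))
assert abs(lap2) < 1e-4 and abs(lap3) < 1e-4
```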

Lemma 6.2. Let us assume that Ω ⊂ R^d is bounded and ∂Ω := Γ ∈ C^{0,1} ∩ PC^1 (piecewise smooth). Then the following Green's formulas are valid:

• First Green's formula: for all u ∈ W^1_p(Ω) and v ∈ W^2_q(Ω) with 1/p + 1/q = 1 it holds:

∫_Ω u(x) ∆v(x) dx = ∫_Γ u(x) (∂v/∂n_x)(x) ds_x − ∫_Ω ∇u(x) · ∇v(x) dx   (6.16)

• Second Green's formula: for all u, v ∈ W^2_2(Ω) = H^2(Ω) it holds:

∫_Ω (u(x) ∆v(x) − v(x) ∆u(x)) dx = ∫_Γ (u(x) (∂v/∂n_x)(x) − v(x) (∂u/∂n_x)(x)) ds_x   (6.17)

• Third Green's formula or representation formula: if we take v(·) = E(·, y) in the second Green's formula and take the traces on Γ, then we obtain [5]:

σ(y) u(y) = −∫_Γ (∂E/∂n_x)(x, y) u(x) ds_x + ∫_Γ E(x, y) (∂u/∂n_x)(x) ds_x + ∫_Ω E(x, y) (−∆u(x)) dx   (6.18)

with

σ(y) := 0 if y ∈ R^d \ Ω,
σ(y) := 1/2 almost everywhere on Γ,
σ(y) := 1 if y ∈ Ω.

Note that once we know the Cauchy data u(x)|_Γ, (∂u/∂n_x)(x)|_Γ and −∆u on Ω, we can calculate the value of the function u at any point in Ω using the representation formula. This is the motivation for the Boundary Element Method.


In order to see how it works, let us consider the following mixed boundary value problem:

−∆u(x) = 0 in Ω,
u(x) = g_D(x) on Γ_D (Dirichlet boundary),
(∂u/∂n_x)(x) = g_N(x) on Γ_N (Neumann boundary),
(6.19)

where Ω ⊂ R^2, Γ_D ∪ Γ_N = Γ := ∂Ω and Γ_D ∩ Γ_N = ∅. We assume that the boundary Γ has a 1-periodic parameter representation. Namely, we assume that there exists a 1-periodic function x(t) such that x(0) = x(1), |x′(t)| ≥ κ > 0, and

Γ = {x = x(t) = [x_1(t), x_2(t)]^T ∈ R^2 : 0 < t ≤ 1}.

Now, if we rewrite Green's third formula and take into account that for our problem −∆u(x) = 0, we get the following identity:

∫_Γ E(x, y) v(x) ds_x = (1/2) u(y) + ∫_Γ (∂E/∂n_x)(x, y) u(x) ds_x   ∀y ∈ Γ   (6.20)

where v(x) := (∂u/∂n_x)(x) denotes the Neumann data. Let us now 'formally' take the derivative of (6.20) in the direction of n_y. We get

the derivative of the (6.20) in the direction of ny. We will get

1

2

∂u(y)

∂ny

= − ∂

∂ny

Γ

∂E(x, y)

∂nx

u(x)dsx +

Γ

∂E

∂ny

(x, y)v(x)dsx. (6.21)

Next, we introduce the following boundary integral operators (BIOs):

• Single layer potential operator V: (V v)(y) := ∫_Γ E(x, y) v(x) ds_x,

• Double layer potential operator K: (K u)(y) := ∫_Γ (∂E/∂n_x)(x, y) u(x) ds_x,

• Adjoint double layer potential operator K∗: (K∗ v)(y) := ∫_Γ (∂E/∂n_y)(x, y) v(x) ds_x,

• Hypersingular operator D: (D u)(y) := −(∂/∂n_y) ∫_Γ (∂E/∂n_x)(x, y) u(x) ds_x.


In both (6.20) and (6.21) we can split the integrals over Γ into the sum of two integrals:

∫_Γ = ∫_{Γ_D} + ∫_{Γ_N}.

This results in:

(1/2) u(y) = ∫_{Γ_D} E(x, y) v(x) ds_x + ∫_{Γ_N} E(x, y) v(x) ds_x
           − ∫_{Γ_D} (∂E/∂n_x)(x, y) u(x) ds_x − ∫_{Γ_N} (∂E/∂n_x)(x, y) u(x) ds_x
(6.22)

and

(1/2) v(y) = −(∂/∂n_y) ∫_{Γ_D} (∂E/∂n_x)(x, y) u(x) ds_x − (∂/∂n_y) ∫_{Γ_N} (∂E/∂n_x)(x, y) u(x) ds_x
           + ∫_{Γ_D} (∂E/∂n_y)(x, y) v(x) ds_x + ∫_{Γ_N} (∂E/∂n_y)(x, y) v(x) ds_x.
(6.23)

One can show that the boundary integral operators we have presented have the following properties [5]:

• V = V∗ is self-adjoint in H^{−1/2}(Γ),

• D = D∗ is self-adjoint in H^{1/2}(Γ),

• K∗ is adjoint to K in L^2(Γ),

• D is positive semidefinite on the space H^{1/2}(Γ) and positive definite on H^{1/2}(Γ)/ker D, meaning there exists a constant µ_D > 0 such that

〈Du, u〉_{H^{−1/2}×H^{1/2}} ≥ µ_D ||u||²_{H^{1/2}}   ∀u ∈ H^{1/2}(Γ)/ker D,

• if the diameter of Ω is small enough, namely diam(Ω) < 1, then V is positive definite on the space H^{−1/2}(Γ), meaning there exists a constant µ_V > 0 such that

〈v, V v〉_{H^{1/2}×H^{−1/2}} ≥ µ_V ||v||²_{H^{−1/2}}   ∀v ∈ H^{−1/2}(Γ).


We skip the technical details, which can be found in [5], and present only the final result of the Galerkin approximation of (6.22)-(6.23). Taking into account the properties of the boundary integral operators we have introduced and following the general Galerkin approximation scheme, we get a saddle-point system of the following form:

[ V_h    −K_h ] [ v ]   [ f ]
[ K_h^T   D_h ] [ u ] = [ g ]
(6.24)

Since our purpose is to stress that the resulting system is a saddle-point system, we will not go into the derivation of this system or the exact expressions for the block matrices V_h, K_h, D_h and the right-hand side vectors f and g. What is more relevant for us here is that the boundary element method is one of the techniques which lead to saddle-point problems.
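To see why systems of the form (6.24) fit the HSS framework of this thesis, note that the Hermitian part of the block matrix is blockdiag(V_h, D_h), while the coupling blocks ±K_h form the skew-Hermitian part. A small sketch with hypothetical random stand-ins for V_h, D_h and K_h (not an actual BEM discretization) illustrates the splitting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def spd(k):
    """A random symmetric positive definite matrix (hypothetical stand-in)."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

Vh, Dh = spd(n), spd(n)                  # stand-ins for the SPD blocks in (6.24)
Kh = rng.standard_normal((n, n))

A = np.block([[Vh, -Kh], [Kh.T, Dh]])    # block structure of (6.24)
H = 0.5 * (A + A.T)                      # Hermitian part
S = 0.5 * (A - A.T)                      # skew-Hermitian part

# The Hermitian part is exactly blockdiag(Vh, Dh), hence positive definite,
# so A is positive real and the HSS splitting A = H + S applies.
assert np.allclose(H, np.block([[Vh, np.zeros((n, n))],
                                [np.zeros((n, n)), Dh]]))
assert np.all(np.linalg.eigvalsh(H) > 0)
assert np.allclose(S, -S.T)
```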

6.3 Analysis and numerics for Mixed Variational

Problems

In the previous sections we have considered some examples of mixed variational formulations. As we have mentioned, in some cases mixed formulations are preferred to the primal formulation. Sometimes this has physical reasons: for instance, in linear elasticity the stresses are more relevant unknowns than the displacements. In other cases the reason for using the mixed formulation of a particular problem lies at the algebraic level: for instance, in the same linear elasticity problem with isotropic materials, the finite element discretization leads to an ill-conditioned matrix. In this section we consider mixed variational problems in a quite general framework. Let us first introduce some notation. Let X and Λ be Hilbert spaces with inner products (·, ·)_X, (·, ·)_Λ and induced norms ||·||_X, ||·||_Λ, respectively. The dual spaces of X and Λ will be denoted by X∗ and Λ∗, with duality products 〈·, ·〉_{X∗×X} and 〈·, ·〉_{Λ∗×Λ}. All the examples we have considered so far are problems of the following type:

Find (u, λ) ∈ X × Λ such that

a(u, v) + b(v, λ) = 〈F, v〉   ∀v ∈ X
b(u, µ) = 〈G, µ〉   ∀µ ∈ Λ
(6.25)

with F ∈ X∗ and G ∈ Λ∗ given continuous linear functionals. We also define the following operators:


• A : X → X∗ : 〈Au, v〉_{X∗×X} := a(u, v) ∀u, v ∈ X,

• B : X → Λ∗ : 〈Bu, µ〉_{Λ∗×Λ} := b(u, µ) ∀u ∈ X, ∀µ ∈ Λ.

Then in operator notation (6.25) can be reformulated as:

Find (u, λ) ∈ X × Λ such that:

Au + B∗λ = f in X∗
Bu = g in Λ∗
(6.26)

where the operator B∗ : Λ → X∗ is the adjoint operator of B, uniquely defined by the relation

〈B∗λ, u〉 := 〈Bu, λ〉 = b(u, λ)   ∀u ∈ X, ∀λ ∈ Λ.

Let

V_0 := {v ∈ X : b(v, µ) = 0 ∀µ ∈ Λ}

and

V_g := {v ∈ X : b(v, µ) = 〈G, µ〉 ∀µ ∈ Λ}.

Here the notations V_0 and V_g should not be confused with the same notations in previous chapters.

6.3.1 Brezzi’s theorem

Now we state the fundamental theorem of mixed variational problems, which is called Brezzi's theorem [18].

Theorem 6.3. Let the following assumptions be fulfilled:

• F ∈ X∗ and G ∈ Λ∗ are given,

• there exists α_2 > 0 such that

|a(u, v)| ≤ α_2 ||u||_X ||v||_X   ∀u, v ∈ X,

• there exists β_2 > 0 such that

|b(u, µ)| ≤ β_2 ||u||_X ||µ||_Λ   ∀u ∈ X, ∀µ ∈ Λ,

• there exists β_1 > 0 such that the so-called LBB condition holds true:

sup_{v∈X} b(v, µ)/||v||_X ≥ β_1 ||µ||_Λ   ∀µ ∈ Λ,

• a(·, ·) is elliptic on ker B, namely, there exists α_1 > 0 such that

a(v, v) = 〈Av, v〉 ≥ α_1 ||v||²_X   ∀v ∈ ker B = V_0.

Then there exists a unique solution (u, λ) such that

a(u, v) + b(v, λ) = 〈F, v〉   ∀v ∈ X
b(u, µ) = 〈G, µ〉   ∀µ ∈ Λ,

or, equivalently,

Au + B∗λ = f in X∗
Bu = g in Λ∗.

Moreover, the following a priori estimates for the solution are known:

||u||_X ≤ (1/α_1) ||F||_{X∗} + (1/β_1)(1 + α_2/α_1) ||G||_{Λ∗}

||λ||_Λ ≤ (1/β_1)(1 + α_2/α_1) ||F||_{X∗} + (α_2/β_1²)(1 + α_2/α_1) ||G||_{Λ∗}
(6.27)

One can show that for all the examples we have considered the conditions ofBrezzi’s theorem hold true. So, the continuous mixed variational problems for theseexamples are well posed.

6.3.2 Mixed Finite Element Approximation

Let X_h := span{p^(i); i = 1, 2, ..., n_h} ⊂ X and Λ_h := span{q^(i); i = 1, 2, ..., m_h} ⊂ Λ be finite dimensional subspaces of X and Λ, respectively. Then the Galerkin approximation to (6.25) reads as follows:

Find (u_h, λ_h) ∈ X_h × Λ_h such that

a(u_h, v_h) + b(v_h, λ_h) = 〈F, v_h〉   ∀v_h ∈ X_h
b(u_h, µ_h) = 〈G, µ_h〉   ∀µ_h ∈ Λ_h
(6.28)


Using the representation of u_h by the basis functions in X_h and of λ_h by the basis functions in Λ_h:

u_h = Σ_{i=1}^{n_h} u^(i) p^(i) and λ_h = Σ_{i=1}^{m_h} λ^(i) q^(i),

we get:

Find u_h = [u^(i)]_{i=1}^{n_h} ∈ R^{n_h} and λ_h = [λ^(i)]_{i=1}^{m_h} ∈ R^{m_h} such that

[ A_h  B_h^T ] [ u_h ]   [ f_h ]
[ B_h   O   ] [ λ_h ] = [ g_h ]
(6.29)

where

f_h = [〈F, p^(k)〉]_k ∈ R^{n_h} and g_h = [〈G, q^(k)〉]_k ∈ R^{m_h},
A_h = [a(p^(i), p^(k))]_{ik} ∈ R^{n_h×n_h},
B_h = [b(p^(k), q^(j))]_{jk} ∈ R^{m_h×n_h}.
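For small problems, a system of the form (6.29) with A_h symmetric positive definite and B_h of full row rank can be solved by block elimination via the Schur complement S_h = B_h A_h^{-1} B_h^T. The following sketch uses hypothetical random data (A_h, B_h, f_h, g_h are stand-ins, not an actual finite element assembly):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                                 # n = dim X_h, m = dim Lambda_h
M = rng.standard_normal((n, n))
Ah = M @ M.T + n * np.eye(n)                # hypothetical SPD block A_h
Bh = rng.standard_normal((m, n))            # hypothetical full-rank block B_h
fh, gh = rng.standard_normal(n), rng.standard_normal(m)

# Block elimination: eliminate u_h from the first block row of (6.29).
Ainv_f = np.linalg.solve(Ah, fh)
Ainv_Bt = np.linalg.solve(Ah, Bh.T)
Sh = Bh @ Ainv_Bt                           # Schur complement B_h A_h^{-1} B_h^T
lam = np.linalg.solve(Sh, Bh @ Ainv_f - gh)
u = Ainv_f - Ainv_Bt @ lam

# Consistency with the full saddle-point system.
K = np.block([[Ah, Bh.T], [Bh, np.zeros((m, m))]])
assert np.allclose(K @ np.concatenate([u, lam]), np.concatenate([fh, gh]))
```

Forming S_h explicitly is only viable for small m_h; for large systems one resorts to iterative methods such as HSS or preconditioned Krylov subspace methods instead.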

Let

V_0h := {v_h ∈ X_h : b(v_h, µ_h) = 0 ∀µ_h ∈ Λ_h}

and

V_gh := {v_h ∈ X_h : b(v_h, µ_h) = 〈G, µ_h〉 ∀µ_h ∈ Λ_h}.

In general, even if the continuous problem satisfies the conditions of Brezzi's theorem, the discrete problem does not necessarily satisfy the analogous conditions. The reason is that

X_h ⊂ X and Λ_h ⊂ Λ do not, in general, imply V_0h ⊂ V_0 and V_gh ⊂ V_g.

However, if V_0h ⊂ V_0 and V_gh ⊂ V_g (which guarantees that the discrete LBB condition and the V_0h-ellipticity of a(·, ·) are satisfied), then the system (6.29) has a unique solution. Moreover, for the solution we have the following result:

Theorem 6.4. Let

• F ∈ X∗ and G ∈ Λ∗ be given,

• there exist α_2 > 0 such that

|a(u, v)| ≤ α_2 ||u||_X ||v||_X   ∀u, v ∈ X,

• there exist β_2 > 0 such that

|b(u, µ)| ≤ β_2 ||u||_X ||µ||_Λ   ∀u ∈ X, ∀µ ∈ Λ,

• there exist β_1 > 0 such that the so-called LBB condition holds true:

sup_{v∈X} b(v, µ)/||v||_X ≥ β_1 ||µ||_Λ   ∀µ ∈ Λ,

• a(·, ·) be elliptic on ker B, namely, there exist α_1 > 0 such that

a(v, v) = 〈Av, v〉 ≥ α_1 ||v||²_X   ∀v ∈ ker B = V_0,

• there exist β_1h > 0 such that the discrete LBB condition holds true:

sup_{v_h∈X_h} b(v_h, µ_h)/||v_h||_X ≥ β_1h ||µ_h||_Λ   ∀µ_h ∈ Λ_h,

• a(·, ·) be elliptic on V_0h, namely, there exist α_1h > 0 such that

a(v_h, v_h) = 〈Av_h, v_h〉 ≥ α_1h ||v_h||²_X   ∀v_h ∈ V_0h.

Then there exists a constant C > 0 with C ≠ C(h) (it does not depend on h) such that

||u − u_h||_X + ||λ − λ_h||_Λ ≤ C ( inf_{w_h∈X_h} ||u − w_h||_X + inf_{γ_h∈Λ_h} ||λ − γ_h||_Λ ).

Now, the next step is to solve the system (6.29). We can see that it is a saddle-point system, and one can use the HSS iterative method to solve such a system.
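A minimal sketch of the HSS iteration of [2] applied to such a saddle-point system is given below (with hypothetical random data; the second block row is negated so that the system becomes positive real with Hermitian part blockdiag(A_h, 0), the setting analyzed in [29]):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
M = rng.standard_normal((n, n))
Ah = M @ M.T + n * np.eye(n)               # hypothetical SPD block
Bh = rng.standard_normal((m, n))           # hypothetical full-rank block
b = rng.standard_normal(n + m)

# Nonsymmetric (positive real) form of the saddle-point system: negating the
# second block row makes the Hermitian part blockdiag(Ah, 0).
K = np.block([[Ah, Bh.T], [-Bh, np.zeros((m, m))]])
H = 0.5 * (K + K.T)                        # Hermitian part
S = 0.5 * (K - K.T)                        # skew-Hermitian part
alpha = 1.0                                # alpha > 0; convergence shown in [29]
I = np.eye(n + m)

def hss_step(x):
    """One HSS iteration: two half-steps with (alpha*I + H) and (alpha*I + S)."""
    x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    return np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)

x_exact = np.linalg.solve(K, b)
# The exact solution is a fixed point of the iteration ...
assert np.allclose(hss_step(x_exact), x_exact)
# ... and the iteration matrix has spectral radius < 1, so the HSS
# iterates converge for this system.
T = np.linalg.solve(alpha * I + S,
                    (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                      alpha * I - S))
assert np.max(np.abs(np.linalg.eigvals(T))) < 1.0
```

In the inexact HSS variant, the two linear solves in `hss_step` are themselves replaced by inner iterations; used inside GMRES, one HSS step serves as the preconditioner.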


Chapter 7

Conclusion

We started this thesis by looking at the main steps of solving elliptic PDEs. We motivated our topic by considering a model problem, which is a representative of elliptic PDEs. We showed the well-posedness of our model problem, namely the existence, uniqueness and stability of the solution. This gave us the background for solving the problem. We then transferred our problem from an infinite dimensional space to a finite dimensional one, so the continuous problem was replaced by a discrete problem. Due to the choice of the finite dimensional subspace of our original space, the solution of the discrete problem approximates the solution of the continuous problem quite accurately. As a result of discretization using the Finite Element Tearing and Interconnecting method, we arrived at a saddle-point problem. As a last step in solving our model problem, we applied the Hermitian and skew-Hermitian iterative method to the resulting saddle-point problem.
We used the HSS method first as a stationary iterative method and saw that the number of iterations depends on the discretization parameter h. Then we applied the HSS method as a preconditioner for the Krylov subspace method GMRES. We also applied GMRES without a preconditioner to our saddle-point system and compared the results. We observed that HSS preconditioning remarkably reduces the number of GMRES iterations. So, though the HSS iterative method does not converge rapidly as an iterative scheme itself, it worked quite well as a preconditioner for GMRES. Of course, due to some limitations we have restrained ourselves, and the investigation we have started can be continued further. For instance, as a first further consideration we could increase the number of subdomains for the FETI method and look at the behavior of the HSS iteration for an increasing number of subdomains of constant size H. This is done, for instance, in [31] for the linear elasticity problem. Another step in our investigation could be to consider a 3D model problem


instead of a 2D problem. A very important consideration would be a comparison of the HSS iterative method with other solvers, for example block-structured preconditioners combined with Krylov subspace methods.


Bibliography

[1] L.C. Evans, Partial Differential Equations, American Mathematical Society, 1998.

[2] Z.Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 603-626.

[3] D. Gilbarg, N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, 2001.

[4] G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Clarendon Press, 1985.

[5] S. Rjasanow, O. Steinbach, The Fast Solution of Boundary Integral Equations, Springer, 2007.

[6] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2003.

[7] T.A. Davis, Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms), Society for Industrial and Applied Mathematics, 2006.

[8] G.H. Golub, C.F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, 1996.

[9] D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics, Cambridge University Press, 2002.

[10] C. Farhat, F.-X. Roux, A method of finite element tearing and interconnecting and its parallel solution algorithm, Internat. J. Numer. Methods Engrg., 32 (1991), pp. 1205-1227.

[11] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numerica, 14 (2005), pp. 1-137.


[12] A. Toselli, O.B. Widlund, Domain Decomposition Methods - Algorithms and Theory, Springer, 2004.

[13] K.C. Park, M.R. Justino, Jr., C.A. Felippa, An algebraically partitioned FETI method for parallel structural analysis: Algorithm description, Internat. J. Numer. Methods Engrg., 40 (1997), pp. 2717-2737.

[14] Z.Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA Journal of Numerical Analysis.

[15] H.C. Elman, Preconditioners for saddle point problems arising in computational fluid dynamics, Appl. Numer. Math., 43 (2002), pp. 75-89.

[16] H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier-Stokes equations, Numer. Math., 90 (2002), pp. 665-688.

[17] M. Fortin, R. Glowinski, Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, Stud. Math. Appl. 15, North-Holland, Amsterdam, 1983.

[18] F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods, Springer-Verlag, New York, 1991.

[19] I. Perugia, V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numer. Linear Algebra Appl., 7 (2000), pp. 585-616.

[20] M. Benzi, Solution of equality constrained quadratic programming problems by a projection iterative method, Rend. Mat. Appl., 13 (1993), pp. 275-296.

[21] P.E. Gill, W. Murray, M.H. Wright, Practical Optimization, Academic Press, New York, 1981.

[22] P.E. Gill, W. Murray, D.B. Ponceleón, M.A. Saunders, Preconditioners for indefinite systems arising in optimization, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 292-311.

[23] N.I.M. Gould, M.E. Hribar, J. Nocedal, On the solution of equality constrained quadratic programming problems arising in optimization, SIAM J. Sci. Comput., 23 (2001), pp. 1376-1395.

[24] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, New York, 1984.


[25] F. Brezzi, A survey of mixed finite element methods, NASA Contractor Report, 1987.

[26] R. Tezaur, Analysis of Lagrange multiplier based domain decomposition, PhD thesis, Charles University, Czech Republic, 1998.

[27] J. Alberty, C. Carstensen, S.A. Funken, Remarks around 50 lines of Matlab: short finite element implementation, Numerical Algorithms, 20 (1999), pp. 117-137.

[28] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM Frontiers Appl. Math. 17, Philadelphia, 1997.

[29] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004), No. 1, pp. 20-41.

[30] Z.Z. Bai, G.H. Golub, C.-K. Li, Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices, SIAM J. Sci. Comput., 28 (2006), pp. 583-603.

[31] A. Klawonn, O.B. Widlund, A domain decomposition method with Lagrange multipliers and inexact solvers for linear elasticity, SIAM J. Sci. Comput., 22, No. 2, pp. 1199-1219.

[32] M. Bebendorf, Hierarchical Matrices, Springer-Verlag, 2008.

[33] O.A. Ladyzhenskaya, Boundary Value Problems of Mathematical Physics, Nauka, Moscow, 1973 (in Russian).

[34] O. Steinbach, Numerical Approximation Methods for Elliptic Boundary Value Problems: Finite and Boundary Elements, Springer-Verlag, 2008.


List of Figures

5.1 Tearing of unknowns on the interface . . . 48

5.2 Spectral radius of the iteration matrix ρ(M_α) for different values of α (h = 1/9) . . . 50

5.3 The spectrum of the iteration matrix for the optimal value of α . . . 50


List of Tables

5.1 Comparison of the HSS iterative scheme, GMRES without preconditioning, and GMRES with the HSS iterative scheme as a preconditioner . . . 51

5.2 Number of iterations for preconditioned GMRES for different accuracies of the inner HSS iteration . . . 51
