Multigrid Methods
Fengwei YANG
Computing
2009/2010
The candidate confirms that the work submitted is their own and the appropriate credit has been given
where reference has been made to the work of others.
I understand that failure to attribute material which is obtained from another source may be considered
as plagiarism.
Summary
This report describes three concepts from Achi Brandt's 1977 paper [3] on using Multigrid
methods to solve boundary value problems. They are, first, the standard Multigrid method on 1-D
linear boundary value problems; second, the non-linear Multigrid method on 1-D non-linear boundary
value problems; third, the adaptive Multigrid method (the non-linear Multigrid method with MLAT)
on 1-D non-linear boundary value problems. Additionally, some general background knowledge and
other techniques are included in this report: a description of boundary value problems, finite
difference discretization schemes, iterative solution methods, the Full Multigrid (FMG) method and the
W-cycle strategy. This report demonstrates the use of each of these, and optimal solutions are obtained.
A number of possible extensions are given at the end of the report.
Acknowledgements
Most of all I’d like to thank my project supervisor Pete Jimack for his support, clear guidance,
excellent feedback and other additional help to an international student whose first language is not
English. And also my assessor Mark Walkley for his useful feedback and great advice. Finally, I also
would like to thank my family, especially my father, who provided this opportunity for me to study in
England, and who is also willing to support his son for another three years of further study.
Contents
1 Introduction 1
1.1 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 1
1.2 Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Solution Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Multigrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.1 Achi Brandt’s 1977 Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.2 Some Recent Multigrid Papers . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Overview of Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . 9
1.6 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Linear Multigrid 12
2.1 Description of Linear Multigrid Method . . . . . . . . . . . . . . . . . . . . . . . .. 12
2.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Two-grid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.3 Multigrid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.4 Validation Of Multigrid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21
2.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.1 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.2 Two-grid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.3 Multigrid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.4 Multigrid Solver On A More General Problem . . . . . . . . . . . . . . . . . 24
2.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
3 Non-Linear Multigrid 28
3.1 Description of Non-Linear Multigrid Method . . . . . . . . . . . . . . . . . . . .. . 28
3.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.1 Non-linear Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . 30
3.2.2 Non-linear Two-grid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.3 Non-linear Multigrid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.4 Validation Of Non-linear Multigrid Solver . . . . . . . . . . . . . . . . . . . . 33
3.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
3.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.1 Non-linear Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . 34
3.4.2 Non-linear Two-grid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.3 Non-linear Multigrid Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4.4 Non-linear Multigrid Solver On A More General Problem . . . . . . . . . .. 38
3.5 FMG and W-cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
3.6 Further Software Development . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . 42
3.7 Further Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
3.7.1 FMG With Non-linear Multigrid Solver . . . . . . . . . . . . . . . . . . . . . 42
3.7.2 W-cycle On A Simplified Non-linear Problem . . . . . . . . . . . . . . . . . . 43
3.7.3 W-cycle On A More General Non-linear Problem . . . . . . . . . . . . . .. . 44
3.8 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
4 Adaptive Multigrid 48
4.1 Description of Adaptive Multigrid Method . . . . . . . . . . . . . . . . . . . . . .. . 48
4.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
4.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
5 Conclusion 54
5.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .54
5.2 Extension To 2-D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
5.2.1 2-D Boundary Value Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2.2 2-D Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2.3 2-D Solution Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.2.4 2-D Multigrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3 Extension To More General Adaptive Multigrid . . . . . . . . . . . . . . . . .. . . . 59
Bibliography 61
A Personal Reflection 63
B Schedule 65
C Code Of Linear Gauss-Seidel Solver 67
D Code Of Linear Two-grid Solver 70
E Code Of Linear Multigrid Solver 75
F Code Of Linear Multigrid Solver On A More General Problem 81
G Code Of Non-linear Gauss-Seidel Solver 87
H Code Of Non-linear Two-grid Solver 89
I Code Of Non-linear Multigrid Solver 94
J Code Of Non-linear Multigrid Solver On A More General Problem 101
K Code Of FMG 108
L Code Of W-cycle 115
M Code Of W-cycle On A More General Problem 122
N Code Of Adaptive Multigrid Solver 129
Chapter 1
Introduction
This chapter introduces the general concept of the Multigrid method and several techniques that are
used by the Multigrid method, as well as the problems that Multigrid will be used to solve. Section
1.1 describes the two-point boundary value problem, which is the problem that this project will focus
on. Sections 1.2 and 1.3 then illustrate the techniques which are used in this project in order to apply
the Multigrid method to boundary value problems. Specifically, section 1.2 discusses the role of
finite difference discretization schemes and section 1.3 focuses on iterative methods used as part of the
Multigrid method. Then, section 1.4 explains the Multigrid method itself, including its origin and recent
applications. Section 1.5 provides some detail of the project and how this report is structured. Finally,
section 1.6 explains the methodology used in this project and the programming language choices.
1.1 Boundary Value Problems
The general concept of a boundary value problem (BVP) is to find an unknown function, depending upon
one coordinate variable (or two variables in 2-D), that satisfies an ordinary differential equation (or a
partial differential equation in 2-D) at every point within a closed region. The unknown
function should also satisfy certain conditions on the boundaries of that region [9]. This project will
only consider second order problems that have Dirichlet boundary conditions on the whole boundary
of the region.
An important characteristic of boundary value problems is their equilibrium, or steady-state,
nature [9]. That is, time is not an independent variable. Hence problems in 1-D are ordinary differential
equations (ODEs) and problems in higher dimensions are partial differential equations (PDEs), typically
of elliptic type.
This project will only focus on solving boundary value ODEs, although extensions to elliptic PDEs
are relatively straightforward and will be discussed. Other types of partial differential
equations also exist, namely parabolic and hyperbolic equations. These two types normally involve
time as an additional independent variable [9], and this project will not consider them.
A general form of an ODE BVP is the following:

    a \frac{d^2 u}{dx^2} + b \frac{du}{dx} + c u = f(x).    (1.1)
Such an equation is the main focus of the first part of this project. It is clear that this problem is in 1-D,
as it only involves the independent variable x; it is linear since all three coefficients (a, b and c) do not
depend upon u; and it is a second order problem.
For further simplicity, it is assumed to begin with that all coefficients stay constant. We note that
two further simplifications are made for some examples in this report: both coefficients b and c are
set to zero. However, even without this modification, the nature of the problem and the use of the Multigrid
method are not changed. For instance, if coefficients b and c are positive, including or omitting them
hardly affects the way the problem is solved. This is illustrated in section 2.4.
A complete description of the boundary value problem not only includes equation 1.1, but also
requires a closed region and boundary conditions. Such requirements must be given before the solution
method can be devised. Therefore, for this problem we also impose:
    x_{left} \le x \le x_{right}, \quad u(x_{left}) = u_{left}, \quad u(x_{right}) = u_{right},    (1.2)
which fixes a domain and defines the boundary conditions. Figure 1.1 illustrates the 1-D boundary value
problem, with the domain and boundary conditions given.
Figure 1.1: 1-D Boundary Value Problem: Domain And Boundary Conditions
1.2 Discretization
The boundary value problem (equations 1.1 and 1.2) given in the previous section is a continuous
problem. That is, on the domain (between x_{left} and x_{right}, with x_{left} \ne x_{right}), there are
infinitely many points. One way to solve such a continuous problem computationally is, first of all, to
approximate the domain by a finite number of points. Then the value of u at each of these points can be
estimated. Such an approximate method is called a discretization.
Although the use of discretizations to approximate continuous problems is common, it also generates
errors between the exact and the approximate solutions [4]. This is one of two errors in the
approximation process (the second error, due to the solution method, will be described in the next section),
and is known as the discretization error.
In this project, we will use the central finite difference method as the discretization algorithm. The
central difference method approximates the value of u at every internal node (excluding the boundary
points) and uses the values of both neighbours in the calculation of the derivatives at each node [4].
Specifically, if the central difference method is applied to equation 1.1, the discretization is the
following:

    a \frac{u_{i-1} - 2u_i + u_{i+1}}{(dx)^2} + b \frac{u_{i+1} - u_{i-1}}{2\,dx} + c\,u_i = f_i.    (1.3)

for each interior node i = 1, ..., N-1, where dx = (x_{right} - x_{left})/N, f_i = f(x_i) and N is the number
of intervals in the discretization. Figure 1.2 illustrates a discretization of this problem with 3 internal
nodes (number of intervals N = 4).
Having obtained a discrete form of the problem, by using equation 1.3, the values at the internal
nodes must now be calculated. Furthermore, the Multigrid method does not work with only one discretization.
As the Multigrid method moves from level to level, it requires an application of the same discretization
with different numbers of internal nodes. Therefore, the central finite difference method will be repeated
Figure 1.2: Discretization of 1-D Boundary Value Problem
on each grid. For example, the Two-grid version requires two discretizations on two different grids.
Various other discretization algorithms exist, such as the finite element method. This method treats
the boundary conditions as integrals in a functional that is being minimized, and can therefore easily
handle boundary conditions which involve derivatives, as well as irregularly shaped boundaries [4].
However, this discretization method is not considered at all in this project. The following section
introduces some of the possible solution methods that can be applied, including the Multigrid method,
which is the subject of this project.
1.3 Solution Methods
After we apply the central finite difference method to the original problem (equation 1.1), there are
various methods that can be used to solve the system of discretization equations (equation 1.3). The first
choice discussed here is Gaussian Elimination. The advantage of this method is that, unlike the iterative
methods, the exact solution can be obtained (except for the effects of rounding) [4]. Additionally,
since our 1-D linear boundary value problem leads to a tridiagonal matrix, it is known that this method
can obtain the exact solution of the discretization problem in O(n) operations [4]. Hence for 1-D problems,
Gaussian Elimination is optimal.
On the other hand, the Gaussian Elimination method will not be optimal in general, because the
tridiagonal matrix only occurs in the 1-D situation. To solve the more general sparse matrices that
occur for 2-D boundary value problems, the Gaussian Elimination algorithm generally needs O(n^3)
operations, or O(n^2) if the sparsity is fully exploited [4].
Another choice for solving the discrete equations is to use iterative methods, such as the well-known
methods of Jacobi and Gauss-Seidel iteration. For sparse matrices such iterative methods generally
run in O(n^2) [13], which significantly improves the efficiency compared to un-optimized Gaussian
Elimination. However, the solution that can be obtained from iterative methods will always contain
some error. This is the second of the two errors in the numerical method. Recalling the
previous section, the first error is generated by the discretization itself, whilst this second error comes
from the solution of that discretization.
The examples provided later in this report use the standard Gauss-Seidel iteration
as the smoother. However, Jacobi iteration can also be applied. For instance, consider the simplified
version of equation 1.3 (that is, b = 0, c = 0 and a = 1):

    \frac{u_{i-1} - 2u_i + u_{i+1}}{(dx)^2} = f_i.    (1.4)
The Jacobi iteration for updating the current node is then the following:

    u_i^{k+1} = \frac{u_{i-1}^k + u_{i+1}^k - dx^2 f_i}{2}.    (1.5)

In this equation, k denotes the current iteration level and i denotes each internal node.
A similar family of iterations is called the weighted Jacobi method [13]. Before updating
to level k+1, we compute an intermediate value at the current node by the following equation:

    u_i^* = \frac{u_{i-1}^k + u_{i+1}^k - dx^2 f_i}{2}.    (1.6)

Then, a weighting factor w is used to calculate level k+1 in equation 1.7. Note that when w = 1, it
yields the original Jacobi iteration:

    u_i^{k+1} = u_i^k + w(u_i^* - u_i^k).    (1.7)
An example application of the weighted Jacobi method is given in [6].
Both the original Jacobi and weighted Jacobi methods compute all components of level k+1 before
using any of them [13]. However, the Gauss-Seidel method incorporates a simple change: when
we update the current node u_i, since u_{i-1}^{k+1} is already available, instead of using the old u_{i-1}
we use this latest value [13]. This leads to the new equation below:

    u_i^{k+1} = \frac{u_{i-1}^{k+1} + u_{i+1}^k - dx^2 f_i}{2}.    (1.8)
This small change not only improves the efficiency, but also reduces the storage that is needed
[13]. As with the Jacobi iteration, there are various extensions based upon the original Gauss-Seidel
iteration. One of the variations is the symmetric Gauss-Seidel method, in which, instead of always
updating in ascending order, we alternate between ascending and descending sweeps [13]. Another
variation is the red-black ordering, which separates the nodes into two groups, so that the nodes in
each group need only their own values and some nodes from the other group to update. This strategy
is a major improvement in terms of parallel computation [13].
1.4 Multigrid
The Multigrid method is commonly accepted as being one of the fastest numerical methods for the
solution of elliptic PDEs, and also for other types of PDEs, such as parabolic and hyperbolic PDEs [12].
Although the Multigrid method is now known as a single method, it is in fact a combination of solution
methods, which Multigrid uses intelligently so that results can be obtained in a more efficient and
accurate manner. The reason for this lies in the nature of the classical iterative methods, such as Jacobi
and Gauss-Seidel: the Multigrid method exploits their smoothing property. That is, these iterative
methods tend to converge much more quickly when the error has a high frequency. Therefore, by
applying a few iterations on a fine grid, we can achieve a large reduction in the highest-frequency
components of the error. Then, by moving down to a coarser grid, the remaining smooth error
components appear oscillatory relative to the coarser mesh, so the iterative method can again quickly
reduce the highest-frequency components of the error. When the coarsest grid is finally reached, the
Multigrid method applies a coarse-grid solver to obtain an "exact" solution and prolongates these results
back to the finest grid to obtain the final solution. This procedure is substantially less expensive since
the coarsest grid has far fewer grid points. The Multigrid method was first introduced by Achi Brandt
in 1977, and section 1.4.1 describes this paper in detail. Section 1.4.2 illustrates some recent
applications that use the Multigrid method.
1.4.1 Achi Brandt’s 1977 Paper
Achi Brandt, in his 1977 paper entitled "Multi-Level adaptive solutions to boundary-value problems"
[3], systematically describes the Multigrid method and its applications. The paper is known as one of
the first, and one of the most important, publications on this subject.
There are three main ideas in this paper. First of all, Brandt illustrates the use of the standard Multigrid
method on linear boundary value problems, known as V-cycles, shown in section 2.2.3. Then
Brandt moves on to the use of the Multigrid method with the Full Approximation Storage (FAS) algorithm
on non-linear boundary value problems. That is, instead of only solving error equations on the coarse
grid, we solve the full problem on each grid to account for the non-linear nature. Finally, the idea of
adaptive mesh refinement is introduced, that is, combining the FAS algorithm with the Multi-Level
Adaptive Technique (MLAT) to achieve adaptivity. This project is based upon this paper, and these
three ideas are demonstrated on 1-D boundary value problems in later chapters.
During my research, various authors were identified as having made use of multiple grids to improve
the solution [3]. The earliest idea that I have found was from R.V. Southwell in 1946 [10]. His idea
was to solve the problem on the coarsest grid and prolongate the results back to the finest grid. The
coarse grid is therefore used to improve the initial guess. However, there is a distinction between
the ideas of Southwell and Brandt. In Brandt's Multigrid method, the use of the coarse grid is to improve
the solution on the finest grid [3]. This was the fundamental contribution and, as the examples of the
Two-grid and Multigrid solvers in sections 2.2.2 and 2.2.3 show, the coarser grids are used to solve the
error equation rather than the original problem itself. The combination of Southwell's idea and Brandt's
standard Multigrid method is now known as the Full Multigrid method (FMG). It improves the standard
Multigrid method by giving a much better initial guess before the V-cycle starts. FMG is demonstrated
in chapter 3.
These different purposes of using the coarse and fine grids separate the Multigrid method itself into
various types. The differences are mainly based upon the structure of the cycles that the Multigrid runs
on. For example, the FMG scheme mentioned above is one type of cycle and will be described and
implemented in chapters 3 and 4. There are also two main cycles, known as the V-cycle and the W-cycle.
A comparison between the V-cycle and the W-cycle is given in [6]. For the same iteration forms V(2,2)
and W(2,2)^1, the V-cycle runs in 2.3 seconds and the W-cycle runs in 3.1 seconds on the same example
problem. However, the efficiency of the two cycles can vary based upon the nature of the problem. The
V-cycle is implemented and treated as the standard Multigrid cycle in this project. However, the W-cycle
will be demonstrated in chapter 3.
^1 V and W stand for V-cycle and W-cycle, respectively. The first number, 2 in this case, indicates the number of iterations
used by the pre-smoother. The second number, also 2 in this example, indicates the number of iterations used by the post-
smoother. The comparison of the V-cycle and the W-cycle is also implemented in this project and is given in chapter 3.
1.4.2 Some Recent Multigrid Papers
The Multigrid method was originally developed to solve boundary value problems in Brandt's paper
[3]. However, over the past 30 years, the idea of Multigrid has been applied to a broad spectrum of
problems, even including those which may have no connection with any kind of physical grid [13].
Furthermore, this wider interpretation of the original Multigrid has led to a number of powerful techniques
with a remarkable range of applicability. According to the Web of Science service^1, Brandt's paper has
been cited 1,384 times by other researchers since its first publication in 1977. In this section, I include
some of my findings from Web of Science on the Multigrid method. All examples chosen here cite
Brandt's 1977 paper. The first two examples are the top two citing articles in the Times Cited list: they
have themselves been cited 1,106 and 666 times by others, respectively. The third article is the most
recent one that cites Brandt's paper. The last two examples are the two "Most Relevant" articles to
the topic of Brandt's paper.
The first example is from Ghia K.N., et al. [11], where the Multigrid method is used in the field of
computational fluid dynamics. It exploits the efficiency advantage of Multigrid over the iterative techniques
that solve the Navier-Stokes equations. By using the Multigrid method, the authors obtained the solution
of a model flow problem for different Reynolds numbers, as well as significantly increasing the density
of mesh refinement that was achievable. Ghia states in the paper that a fine-mesh solution can be obtained
efficiently by using the Multigrid method, even with 66,049 computational points and Re as high
as 10,000. Additionally, this paper includes some practical examples that use the full weighting and the
optimal weighting. The full weighting strategy is used later on in this project.
The second example is from Berger M. and Oliger J. [1]. To solve hyperbolic partial differential
equations, M.J. Berger in her paper used the Multigrid method along with an adaptive finite
difference method. Berger presents an algorithm that uses automatic grid refinement for the numerical
solution of PDEs. Such an adaptive Multigrid method, compared to the Multigrid method on uniform
grids, takes only a fraction of the time to obtain a solution of the same accuracy. However, Berger
notes that the adaptive sub-grid generation still needs further development. Currently, the algorithm
is still not appropriate for some situations; for example, when using automatic grid refinement, the
strategy for steady state computations is still unknown.
A more recent example is from Bao H.J., et al. [14] in 2009. This paper illustrates a way of solving
^1 The first search was in February 2010; the latest update of the citation counts was on 3rd May 2010.
the Poisson equation on some gigantic meshes by implementing a modified version of the original
Multigrid method, called out-of-core Multigrid. This approach manages to handle meshes with
14M vertices using merely 84MB of memory. A key feature of this out-of-core Multigrid is that it can
accept user-defined residual equations and other Multigrid operators. However, this algorithm takes
much longer than the in-core algorithm in terms of performance.
Another use of the Multigrid method is described by Sullivan D.J., et al. in [5]. This paper demonstrates
implementations of a version of the Multigrid method with real-space algorithms for electronic structure
calculations, called the Multigrid-accelerated solver. The authors also illustrate how this solver
works on Kohn-Sham equations, Poisson equations and additional examples. A remarkable feature of
this method is that it permits efficient calculations even for ill-conditioned systems with long length
scales and high energy cutoffs. Again, this implementation shows the advantage of the Multigrid
method in terms of efficiency.
A final example is from Achi Brandt and co-workers [8]. In this paper, Brandt describes an application
of the Multigrid method with the finite difference method for solving the following diffusion equation,

    -\nabla \cdot (D(x,y)\nabla U(x,y)) + \sigma(x,y)\,U(x,y) = f(x,y)

in a bounded region where D is positive and the coefficients are discontinuous across internal boundaries.
This paper focuses on strong discontinuities in D; the difficulty arises when D jumps by orders of
magnitude. Brandt presents a variety of relaxation schemes, interpolation operators and residual
weighting operators. They are tested in a number of different cases where D changes significantly.
A large number of results show that different relaxation schemes and other Multigrid operators have
their own advantages and disadvantages. For example, some schemes are only appropriate when the
problem is relatively easy. Others may be suitable for a large range of D, but are not efficient in terms
of speed and accuracy.
1.5 Overview of Report
Based upon Brandt's 1977 paper, this report is separated into three main parts. Chapter 2 illustrates the
use of the standard Multigrid method on 1-D linear boundary value problems. Chapter 3 describes the
use of the standard Multigrid method with the FAS algorithm on 1-D non-linear boundary value problems.
Chapter 4 demonstrates the Multigrid method with MLAT on general 1-D boundary value problems,
which achieves the adaptive purpose.
Recalling the previous sections, the Multigrid methods that are implemented in this project use the
central finite difference and Gauss-Seidel methods as the discretization and smoothing methods. The
stopping criterion used in most of the coming examples is the difference between successive iterations;
however, this is discussed in detail in each example.
1.6 Methodology
This project uses C as the programming language (some of the code is included in the appendices). The
reasons for this include that C is a standard programming language that has been widely used. It is also
a standard programming language for parallel programming, which could be an extension of this project.
Most of all, however, I wished to use this project as an opportunity to practice programming in C.
There are a number of alternative languages that could be used to implement the Multigrid methods. For
example, MATLAB and JAVA are two commonly used programming languages in the scientific computing
area. Austin M. and Chancogne D. [7] discuss these three languages in detail. From Austin's
experience, teaching engineering students who have no background understanding of programming to
use C is much more difficult than teaching them MATLAB or JAVA. The reason for this is that MATLAB
and JAVA both provide a user-friendly interface, and a number of official maths functions are already
written. Furthermore, in Austin's view, MATLAB efficiently solves problems which can be converted
into matrices. Once a problem is solved, MATLAB can present the solution as 2-D or 3-D
graphics very easily. JAVA, on the other hand, provides a graphical user interface on a variety of
platforms, and programming in JAVA is beneficial if the program needs to access other programs or
databases on the Internet.
C, compared to these two, is a relatively low-level programming language. A number of low-level
operations must be set up manually. For example, if an array or a matrix needs to be stored, the memory
allocation must be done with the correct length, otherwise the data in memory may be overwritten
without the user's knowledge. However, C, in Austin's view, is an ideal language for programming
the finite element method. Additionally, C can handle significantly larger problems.
I have used all three languages in my degree course and, in my opinion, C, even though it requires more
complicated programming, is still the language of choice for this project. The reason is that I can
manually manage the functions that are used by the Multigrid method, and therefore gain a detailed
understanding of how the programme behaves. Using the official functions, on the other hand, still
carries risks, since the user can hardly inspect or understand the functions in the official library in
detail. For example, during my project, when I used the maths function pow() which is provided in the
C maths library, the CPU time of calculating powers 2 and 3 was much less than that of power 4. This
is caused by the pow() function's behaviour. The pow() and sin() functions are the only two functions
for numerical calculations from the official library in all of my code.
The implementation of the code follows an incremental methodology. For instance, in the linear and
non-linear cases, the single Gauss-Seidel iteration is implemented first and is used later in both the
Two-grid solver and the Multigrid solver. For the Multigrid method, the Two-grid solvers are also
implemented before the Multigrid solvers, because the Multigrid solvers are built on the Two-grid solvers.
This project therefore takes major advantage of code reuse. Furthermore, since the later programming
of the FMG and W-cycle still requires the Multigrid solver structure, the modifications to the
Multigrid solver are relatively small and easy. Finally, to obtain the adaptive Multigrid solver, the
only change made is to the starting and ending nodes in the calculations, to allow the Multigrid to be
applied on a local region.
This incremental methodology can also be seen from the schedule of this project in appendix B.
Week 1 was used to understand the general concept of the Multigrid method and to produce the single Gauss-
Seidel solver. Then in week 2, the Two-grid version was implemented by using the single Gauss-Seidel
solver from week 1 as the smoother, together with some additional calculations (residual, error equation etc.).
A literature search was done in week 3 to understand the Multigrid method and its applications in detail.
In week 4, the Multigrid solver was implemented by adding the recursive approach to the Two-grid
solver from the week before. The same process was followed when the Multigrid method was applied to
non-linear boundary value problems. With a working non-linear Multigrid solver, the FMG and W-
cycle were then implemented by modifying the original non-linear Multigrid solver. Finally, the adaptive
Multigrid solver was another modified version of the non-linear Multigrid solver, which allowed the
solver to start and finish in a local region on the finest grid.
11
Chapter 2
Linear Multigrid
This chapter describes the first idea of Brandt's 1977 paper, that is the standard Multigrid method,
generally known as the V-cycle Multigrid method (another version of the Multigrid method is the W-
cycle, which is discussed in detail in the next chapter). Section 2.1 explains some general concepts that are
needed by the standard Multigrid method. Section 2.2 illustrates the implementation of the standard
Multigrid method. Within it, section 2.2.1 first describes an implementation of the single Gauss-
Seidel iteration; section 2.2.2 then shows the implementation of the Multigrid method which only uses
two grids, namely a coarse grid and a fine grid; section 2.2.3 then demonstrates the complete Multigrid
solver which is built on the Two-grid solver. Section 2.2.4 gives a validation of the Multigrid solver to
ensure that full convergence is achieved on the whole domain. Section 2.3 gives a general view of
the development of my codes. Section 2.4 presents some results of these three solvers and discusses
them. Section 2.5 concludes the standard Multigrid method.
2.1 Description of Linear Multigrid Method
The use of the Multigrid method on linear boundary value problems is based upon a very important
relation between the residual and the error. Before describing this relation, however, let us introduce the
general ideas of the residual and the error.
First of all, when discretized, a linear boundary value problem can be written in the following form,

Au = b, (2.1)

where the matrix A contains the coefficients, u denotes the exact solution of this problem, and b denotes the
right-hand side values.
However, the result which is obtained by using an iterative method (e.g. Jacobi or Gauss-Seidel)
can only be an approximate solution: u_approximate denotes such a solution. Hence, the true difference
between u and u_approximate is the error, shown in equation 2.2. Note that e is the exact error:

e = u − u_approximate. (2.2)

The residual is then calculated as follows. Note that the residual can always be calculated during the
process:

r = b − Au_approximate. (2.3)
Now the relation between the error and the residual can be observed:

Ae = A(u − u_approximate)
   = Au − Au_approximate
   = b − Au_approximate
   = r.

Equation 2.4 denotes this relation, and it is called the error equation:

Ae = r. (2.4)
However, solving this error equation exactly is as difficult as solving the original problem
(equation 2.1) exactly. Therefore, instead of the exact error e, an approximate solution e_approximate is obtained;
further details of this appear below.
Using this approximate error, a better approximation than u_approximate can be simply calculated, shown
as equation 2.5:

u_better_approximate = e_approximate + u_approximate. (2.5)
To summarise, these ideas are used in the process of the Multigrid solver in section 2.2.3 (also in the
Two-grid solver in section 2.2.2; however, the Two-grid solver does not have the recursive behaviour).
The approximate values of u are calculated by using Gauss-Seidel iteration on the finest grid, and the
residual r may then also be calculated. These two computations recursively run until the coarsest grid is
reached. The approximation of the error is then obtained by exactly solving the error equation
on the coarsest grid. These errors are recursively interpolated back to the finest grid for correction. The
improved value u_better_approximate can therefore be obtained on the finest grid.
2.2 Implementation
2.2.1 Finite Difference Method
An initial solver was written for a simple 1-D model boundary value problem:

d²u/dx² = −π²sin(πx), (2.6)

within the region

0 ≤ x ≤ 1,

with boundary conditions fixed at each end of the domain:

u(0) = u(1) = 0.
One way of solving this is, first of all, to discretize the problem by using the central finite difference
method (details on discretization and the central finite difference method are described in the previous
chapter). This continuous problem then becomes a finite number of algebraic equations corresponding
to the internal nodes, as shown in the equation below:

(ui−1 − 2ui + ui+1)/(dx)² = fi. (2.7)

In this equation, i denotes the index of each internal node and dx denotes the distance between two
adjacent nodes. Solving this equation by using the Gauss-Seidel iterative method is now relatively
easy, as for each internal node, its value is calculated by using the value of itself and the values of its
two neighbours (details on iterative methods are discussed previously in section 1.3). Table 2.2 shows
results that use the Gauss-Seidel method to solve this problem.
2.2.2 Two-grid Solver
Having implemented an initial solver (the Gauss-Seidel method in previous section), as a first step
towards Multigrid, a two-grid solver was implemented next. To illustrate how this works, consider a
coarse grid that has four intervals and a fine grid that has eight intervals.
First of all, an initial guess is chosen on the fine grid, as illustrated in Figure 2.1.
Figure 2.1: Initial guess
Then, two Gauss-Seidel iterations are applied on the fine grid in order to smooth out the high frequency
components of the error. See Figure 2.2 for the resulting approximation.
Figure 2.2: Two Gauss-Seidel iterations are applied
Residuals are then calculated by using the values of these latest estimates of the solution at the
internal nodes. See Figure 2.3. Note that the second last residual is always zero, because the nodes are
updated in order from left to right and the boundary values are exact.
Figure 2.3: Residuals are calculated
The next step is to restrict these residuals onto the coarse grid, which has only 3 internal nodes.
Here, the algorithm for restriction is relatively simple and is called full weighting. This means that each
node on the coarse grid takes half of the residual value from the corresponding node on the fine grid,
and a quarter from each of its two neighbours. The result of the restriction operation is shown in
Figure 2.4.
Figure 2.4: Restriction using full-weighting
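The full-weighting rule just described (one half from the coincident fine node, one quarter from each neighbour) can be sketched in C; the function name and the convention that coarse node i coincides with fine node 2i are my own assumptions:

```c
/* Full-weighting restriction of the fine-grid residual rf (nf intervals)
   onto the coarse grid rc (nf/2 intervals).  The convention assumed here
   is that coarse node i coincides with fine node 2i. */
void restrict_full_weighting(const double *rf, double *rc, int nf)
{
    int nc = nf / 2;
    rc[0] = rc[nc] = 0.0;               /* boundary residuals are zero */
    for (int i = 1; i < nc; i++)
        rc[i] = 0.25 * rf[2*i - 1] + 0.5 * rf[2*i] + 0.25 * rf[2*i + 1];
}
```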
The next step is to solve the error equation "exactly" on the coarse grid, using these residuals as
the right-hand side and the coarse grid finite difference matrix on the left-hand side. The resulting error
is shown in Figure 2.5, and was computed by applying Gauss-Seidel iterations until convergence.
Figure 2.5: Solving error equation "exactly"
Now the approximate errors are known on the coarse grid. However, we wish to have the approx-
imate errors on the fine grid, hence interpolation (or prolongation) is applied. To achieve this, the
corresponding nodes on the fine grid take the error values exactly from the coarse grid, and those nodes
in the middle of two coarse grid nodes take the average of those two values. See Figure 2.6.
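This piecewise linear interpolation can be sketched in C as follows (a sketch with assumed names; coarse node i is taken to coincide with fine node 2i):

```c
/* Piecewise linear interpolation (prolongation) of the coarse-grid error
   ec (nc intervals) onto the fine grid ef (2*nc intervals).  Coarse node i
   is assumed to coincide with fine node 2i. */
void prolong_linear(const double *ec, double *ef, int nc)
{
    for (int i = 0; i <= nc; i++)
        ef[2*i] = ec[i];                        /* coincident nodes copy */
    for (int i = 0; i < nc; i++)
        ef[2*i + 1] = 0.5 * (ec[i] + ec[i+1]);  /* midpoints average */
}
```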
Correction is then carried out. This simply adds the errors that have just been estimated to the solution
values on the fine grid that were calculated before. The result is shown in Figure 2.7.
Finally, two more Gauss-Seidel iterations are taken on the fine grid after the correction. See Figure 2.8
for the resulting solution.
This process is one complete V-cycle, which is illustrated in summary in Figure 2.9.
Figure 2.6: Prolongation using piecewise linear interpolation
Figure 2.7: Correction
Figure 2.8: Final results from one V-cycle of the Two-grid solver
Figure 2.9: Two-grid V-cycle
The discrete form of equation 2.6 was solved on a number of fine grids, as shown in Table 2.3, which
illustrates how many V-cycles were required to converge, based upon a stopping criterion of the difference
between iterations.
2.2.3 Multigrid Solver
For the same discretized boundary value problem illustrated in the previous section (equation 2.6), the V-
cycle of the Multigrid version is similar to that of the Two-grid solver. However, there are now more than
two grids in the process. It starts on the finest grid; between any pair of finer and coarser grids (except on
the coarsest grid), the Two-grid approach is recursively carried out in the usual manner. On the coarsest
grid, the error equation is then solved "exactly", and the resulting errors are recursively interpolated back
to the finer grids, as in the Two-grid approach, until the finest grid is reached. Figure 2.10 shows the
Multigrid V-cycle.
Such an approach is based upon a recursive algorithm. Pseudo code of the Multigrid version is shown
in Figure 2.11 and demonstrates this recursive function. The first call of this function is made in
main, with the calculated right-hand side values, the initial guess on the domain, and the number of intervals
of the finest grid as the three inputs b, u and N.
Figure 2.10: Multigrid V-cycle
When the Multigrid function is called, the first half of the steps are the same as in the Two-grid solver.
It then recursively calls itself, passing the current residual values as the right-hand side, the initial
guess of the error as the values on the domain, and halving the number of intervals on the grid each time.
This process recursively runs until the coarsest grid is reached, where the error equation is solved "exactly".
The values are then interpolated back to the finest grid, as in the Two-grid solver. An evaluation of the
Multigrid solver has been carried out, and details are shown in Table 2.4.
2.2.4 Validation Of Multigrid Solver
A validation is done for this Multigrid solver to verify that not only the central node but all
nodes have converged. Table 2.1 shows the results: starting from a Multigrid solver whose finest grid has
eight intervals, when the number of intervals is doubled and the number of grid levels increases each time,
the errors at the corresponding nodes on the later finest grids show quadratic convergence. Note that
"Finest Grid = 8" means there are 8 intervals on the finest grid, which gives 9 nodes (including the two
boundary points); the node indexes start at 0, and nodes 0 and 8 are the boundary points, on which the
errors should be zero.
Figure 2.11: Pseudo code of Multigrid
Nodes Finest Grid = 8 Finest Grid = 16 Finest Grid = 32 Finest Grid = 64
0 0.000000 0.000000 0.000000 0.000000
1 0.004956 0.001231 0.000307 0.000076
2 0.009157 0.002276 0.000568 0.000142
3 0.011964 0.002973 0.000742 0.000185
4 0.012950 0.003218 0.000803 0.000200
5 0.011964 0.002973 0.000742 0.000185
6 0.009157 0.002276 0.000568 0.000142
7 0.004956 0.001231 0.000307 0.000076
8 0.000000 0.000000 0.000000 0.000000
Table 2.1: Errors On All Nodes Of The Finest Grid = 8, And Errors On The Corresponding Nodes Of
The Finest Grid = 16, 32 And 64 (Stopping Criteria: Difference Between Iterations, Tolerance: 10⁻⁶.)
It is clear from table 2.1 that the Multigrid solver has the symmetric structure which we are
expecting. Additionally, the reduction of the errors at all nodes shows the quadratic convergence
behaviour.
2.3 Software
All three implementation codes are included in the appendices, and all codes use the standard maths
library (with command−lm in GCC complier). Appendix C shows the single Gauss-Seidel solver. In
the ”main” function, it simply runs a loop to apply the Gauss-Seidel iteration forevery internal node.
Appendix D shows the Two-grid solver. Since there are a number of different calculations, I have
coded them into functions, and in ”main”, it just simply call these functions in a certain order. There
is additional tolerance in the code, ”Tol2”, which controls the stopping criteria of the error equation
calculation. Such tolerance is smaller than the ”Tol1” which controls the whole V-cycle, the reason for
this is on the coarsest grid, we need to solve the error equation as exact as we can, so the updating
process can achieve improving the answer. This second tolerance is used in all Two-grid and Multigrid
solvers. Appendix E indicates the Multigrid solver, the most important figure inthe Multigrid solver but
not in Two-grid solver is the recursion in the ”Multigrid” function. AppendixF then shows the Multigrid
solver on a more general problem, all three coefficients are defined at the beginning of the code, so it
21
can be efficiently changed and tested. Such code follows the same structure as the Multigrid solver in
appendix E.
2.4 Evaluation
2.4.1 Finite Difference Method
Table 2.2 shows the results from using the Gauss-Seidel iteration to solve a linear problem in 1-D
(equation 2.6, discretized to give equation 2.7). Note the stopping criterion which is used here: when
the maximum difference between the results of the previous and current iterations is smaller than a fixed
tolerance of 10⁻⁶, the programme terminates and gives the results.
Number of intervals Number of iterations required
256 6777135
512 35506324
1024 133686663
2048 243822264
4096 420085575
8192 967258808
Table 2.2: Gauss-Seidel Iteration (Stopping Criteria: Difference Between Iterations, Tolerance: 10⁻⁶.)
It is clear from table 2.2 that when the number of intervals is doubled each time, the number of
iterations that are required for the Gauss-Seidel method to satisfy the stopping criterion grows by a factor
of roughly two or more. This verifies that the Gauss-Seidel method runs in a time of at least O(n²), given
that the cost of each iteration is O(n).
2.4.2 Two-grid Solver
The results which are shown in this section are from solving the discrete form of equation 2.6 on a
number of combinations of coarse and fine grids. In each case, the number of intervals on the coarse
grid is half the number of intervals on the fine grid.
Table 2.3 shows the number of divisions on each finite difference grid, along with the error at
the centre point relative to the true solution value of 1 (since the true solution is known in this case) and
the number of V-cycles required for the Two-grid solver to converge. Convergence is assumed when the
infinity norm of the difference between solutions after consecutive V-cycles is less than a prescribed
tolerance.
Intervals on Finest Level Error to the true solution V-cycles
8 0.0129507497169088 4
16 0.0032189707046060 5
32 0.0008035993807507 5
64 0.0002009021337843 5
128 0.0000501839819600 5
256 0.0000120155551815 5
Table 2.3: Two-grid Solver (Stopping Criteria: Difference Between Iterations, Tolerance: 10⁻⁶.)
With a fixed tolerance of 10⁻⁶, it is clear that as the density of the fine grid increases, the number of
V-cycles taken by the Two-grid solver stays constant. Furthermore, the errors obtained as the number of
intervals on the grids increases are approximately quartered each time.
From these observations, it is clear that the error is proportional to dx², where dx is the distance
between two adjacent nodes. This is consistent with the second order central finite difference
approximation that is used here.
2.4.3 Multigrid Solver
In this section, the Multigrid solver solves the same problem as the Two-grid solver solved previously.
Having validated the Multigrid solver in table 2.1, table 2.4 now shows the number of V-cycles
taken by the Multigrid solver to converge (same stopping criterion as before) and the cost in
CPU time. Additionally, a comparison with the standard Gauss-Seidel iteration is made. Both the Multigrid
solver and the Gauss-Seidel iteration are tested on a number of grids with the number of intervals going up
to around one million (in all cases the coarsest grid used in the Multigrid solver contained four intervals).
These results are therefore believed to be general enough to show the typical nature of both solvers.
In comparison with the results shown in Table 2.2, the Gauss-Seidel iteration here uses a different
stopping criterion: instead of calculating the difference between iterations, this time we calculate
the residual of the current values. The programme only terminates when the maximum value of the current
residuals is smaller than the tolerance. The tolerance of 10⁻⁶ is fixed as before.
Finest Grid    Gauss-Seidel iterations    CPU time    V-cycles    CPU time (seconds)
1024    1603510    40.817s    6    0.001
2048    6414335    332.098s    6    0.002
4096    25660670    44m20.020s    6    0.003
8192    (quit waiting    —    6    0.005
16384    after two hours)    —    6    0.012
32768    —    —    6    0.026
65536    —    —    6    0.050
131072    —    —    6    0.095
262144    —    —    6    0.206
524288    —    —    6    0.445
1048576    —    —    6    0.898
Table 2.4: Multigrid Solver (Stopping Criteria: Difference Between Iterations, Tolerance: 10⁻⁶.)
The reason for using a different stopping criterion for the Gauss-Seidel iteration is its very slow
convergence when the number of intervals gets very large. In that case, although the answer has still
not fully converged, the maximum difference between iterations can fall below the tolerance and therefore
cause the program to terminate while the error is still large. The Multigrid solver, on the other hand, can
achieve full convergence with the stopping criterion based on the difference between iterations, and
therefore this criterion does not need to change.
It can be seen from the table that when the number of intervals on the grids is doubled, the number of
iterations taken by the Gauss-Seidel solver approximately quadruples each time, hence the CPU time
increases by a factor of about eight! However, with the same fixed tolerance of 10⁻⁶, and the same
increase in the density of the finest grid and the number of grid levels, not only does the number of V-cycles
stay constant, but the time used for the calculation also increases only linearly. This verifies that the
Multigrid method, so far for the 1-D linear boundary value problem, runs in O(n) time.
2.4.4 Multigrid Solver On A More General Problem
Having evaluated the Two-grid and the Multigrid solvers on the simplified version of a 1-D boundary
value problem (equation 2.6), a more general boundary value problem is implemented and tested in this
section. This more general problem takes the form:

a d²u/dx² + b du/dx + c u = f(x), (2.8)

with the same domain and boundary conditions as before. Applying the central finite difference
gives the following equation at each internal node:

a (ui−1 − 2ui + ui+1)/(dx)² + b (ui+1 − ui−1)/(2dx) + c ui = −π²sin(πxi). (2.9)
Compared to equation 2.8, equation 2.6 only has the diffusion term, whereas now we also have
non-zero advection (b ≠ 0) and reaction (c ≠ 0) terms. Table 2.5 shows the number of V-cycles required
for convergence of the Multigrid solver as the three coefficients a, b and c are changed for each run.
Note that the stopping criterion in this section is always the difference between iterations
(infinity norm < 10⁻⁶).
Finest Grid    a, b, c = 1    a = 1 and b, c = 0.5    a = 1 and b, c = 0.1
8 8 6 6
16 7 7 7
32 7 7 7
64 7 6 6
128 7 6 6
256 7 6 6
512 7 6 6
1024 7 6 6
2048 7 6 6
Table 2.5: Number of V-cycles with different coefficients b and c (Stopping Criteria: Difference Be-
tween Iterations, Tolerance: 10⁻⁶.)
From the results above, it is clear that the reductions in the coefficients b and c have little effect on
the current Multigrid solver, since the diffusion term is dominant in each case. Next, in table 2.6, we
decrease the coefficient a while keeping b and c constant (b, c = 1). This leads our program towards
the situation which is called advection domination (also known as convection-dominated). This situation
occurs when, for example, the second order term of a second order differential equation becomes very
small, so that the ratio b/a of the two coefficients gets large; in this case the changes in the first order term
dominate the behaviour of the equation.
Finest Grid    a = 1 (ratio b/a = 1)    a = 0.5 (ratio b/a = 2)    a = 0.1 (ratio b/a = 10)    a = 0.05 (ratio b/a = 20)
8 8 10 14 22
16 7 13 18 17
32 7 13 13 14
64 7 13 12 15
128 7 13 11 14
256 7 13 11 14
512 7 13 11 14
1024 7 13 11 14
2048 7 13 11 14
Table 2.6: Number of V-cycles with different ratios b/a (Stopping Criteria: Difference Between Iterations,
Tolerance: 10⁻⁶.)
From table 2.6, it is clear that as the coefficient a decreases and the ratio between b and a gets larger,
the number of V-cycles increases. These results show that when decreasing the coefficient a
(enlarging the ratio between b and a), the problem itself gets harder to solve each time, so the Multigrid
solver needs more V-cycles to satisfy the stopping criterion. Note however that the number of V-cycles
does not grow with the finest grid in any of these cases!
Furthermore, if the coefficient a gets too small, the program fails to converge at all. The reason
for this is that the Multigrid solver cannot solve such an equation on a very coarse grid; the inaccurate
solution generated on a very coarse grid leads to a worse correction. One way
of avoiding this situation is to increase the size of the coarsest grid on which we solve the problem "exactly".
Table 2.7 shows the failure of convergence when a is very small (the ratio between
b and a is very large), and the remedy of this failure, where we increase the coarsest grid. Note that in this
example, when a = 0.04 (the ratio b/a = 25), the failure of convergence occurs, and the desired output is
1.0. Note that the cause of the problem here is that we have used a central difference approximation to
the differential equation, and this is unstable when the problem is advection dominated, unless the grid
is sufficiently fine. The larger the ratio between b and a, the finer the grid needs to be for the central
difference scheme to be stable [12].
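This stability requirement is commonly expressed through the mesh Péclet number: the central scheme is well behaved when |b|·dx/(2a) is at most 1, which a code could check as sketched below (this helper is hypothetical and not part of the report's software). For a = 0.04 and b = 1 this demands dx ≤ 0.08, i.e. more than 12 intervals, which is consistent with the coarsest grid of 16 succeeding in Table 2.7 while the coarsest grid of 4 fails.

```c
#include <math.h>

/* Rough stability check for the central scheme applied to
   a u'' + b u' + ... = f on a grid with n intervals (dx = 1/n):
   the mesh Peclet number |b| dx / (2a) should not exceed 1. */
int central_scheme_stable(double a, double b, int n)
{
    double dx = 1.0 / n;
    return fabs(b) * dx / (2.0 * a) <= 1.0;
}
```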
Coarsest/Finest Grids Output V-cycles Coarsest/Finest Grids Output V-cycles
4/8 -41029.050862 11 16/32 1.0021992947 283
4/16 1.371829 11 16/64 1.0005502382 172
4/32 4.530338 11 16/128 1.0001379623 153
4/64 0.681204 11 16/256 1.0000350790 150
4/128 0.337222 11 16/512 1.0000094019 148
4/256 0.328606 11 16/1024 1.0000028673 147
4/512 0.332111 11 16/2048 1.0000012470 147
4/1024 0.333094 11 16/4096 1.0000008443 147
Table 2.7: Advection Domination: Converged Value Of Output (At Centre Of Domain) Should Be 1.0
(Stopping Criteria: Difference Between Iterations, Tolerance: 10⁻⁶.)
2.5 Discussion
This chapter has shown in detail the linear Multigrid method and its implementations (the Two-grid solver
and the Multigrid solver), as well as the general Gauss-Seidel iteration and its application. I have shown
step by step how the Two-grid solver solves a 1-D model boundary value problem, and also explained the
Multigrid solver, which is straightforwardly built from the Two-grid solver. The evaluations illustrate
the advantage of the Multigrid method in comparison with the general Gauss-Seidel iteration. Ad-
ditionally, the results from the Two-grid and the Multigrid solvers on a simplified 1-D model boundary
value problem are also shown. For a more general linear problem, we have seen the generalisation of the
Multigrid method. The possible problem arising from the use of central differences when the diffusion
term is small is illustrated, and a solution to this problem is given and demonstrated. Having implemented
and tested this linear Multigrid solver, it is time to move towards solving non-linear problems.
Chapter 3
Non-Linear Multigrid
This chapter illustrates the second concept of Brandt's 1977 paper, that is the standard Multigrid method
with the Full Approximation Storage (FAS) algorithm, which allows the original Multigrid method to
solve non-linear boundary value problems. The examples that are provided in this chapter are still in 1-D.
Section 3.1 describes the general idea of the non-linear Multigrid method and the differences between it
and the linear one. Section 3.2 demonstrates its implementations on a non-linear model boundary value
problem, again discretized with the finite difference method, using a Two-grid solver and a Multigrid
solver. Section 3.3 discusses the implementations of the software. Section 3.4 then evaluates these three
implementations. Section 3.5 introduces two new strategies for the non-linear Multigrid solver: the
W-cycle and the Full Multigrid algorithm. Section 3.6 provides a discussion of the software that
implements these two strategies. Section 3.7 further evaluates these two new techniques, and
shows how they improve (or otherwise) the original non-linear Multigrid solver. Section 3.8 gives the
conclusion of this chapter.
3.1 Description of Non-Linear Multigrid Method
For Multigrid methods, the major distinction between linear and non-linear problems is that the relation
between the residual and the error of a linear problem, shown in the previous chapter, does not exist in the
non-linear situation. Since the standard Multigrid method depends significantly upon this relation, various
modifications need to be made. First of all, the full approximation storage algorithm
is introduced by Brandt [3]. The general concept of this algorithm is that, instead of storing a correction
e_approximate, we store the full current approximation u_approximate on every level of grid. Secondly, a
non-linear version of the Jacobi or Gauss-Seidel iteration, based upon Newton's method, is applied in
this process in order to smooth the non-linear equations [13].
For a general non-linear boundary value problem, the discretized equations can be written in the
following form. Note that the superscript f denotes values on the fine grid and the superscript c denotes
values on the coarse grid:

A^f(u^f) = f^f. (3.1)

Now A, instead of being a single matrix as in the linear solver, becomes a vector-valued function that
takes u as input. However, the residual can be calculated in the same manner. The following equation
demonstrates this process:

r^f = f^f − A^f(u^f). (3.2)

As in the linear solver, we may restrict the values of the residual to a coarse grid. However,
for non-linear problems, we now restrict both the residual r and the approximate value u, as shown below.
Note that the full weighting algorithm can still be applied in the restriction:

r^c = I_f^c(r^f),
u* = I_f^c(u^f).
Now u* contains the restricted values from the fine grid, and it also holds the connection between the two
grids. The next step is, instead of solving an error equation to obtain e_approximate, to solve the full problem
with a modified right-hand side on the coarse grid. See the following equation:

A^c(u^c) = f^c + (r^c − (f^c − A^c(u*)))
         = r^c + A^c(u*).

The result of solving this equation "exactly" is u^c; the term e_approximate can then be obtained from
equation 3.3:

e_approximate = u^c − u*. (3.3)

After the same interpolation process as for the standard Multigrid method, the correction follows in
the same manner as the linear solver, shown in equation 3.4:

u_new = u^f + e_approximate. (3.4)
In conclusion, the major modifications are the additional computation of u*, and that on the coarse grid,
instead of solving the error equation, we now solve the full problem itself but with a modified right-
hand side. Although the modifications are relatively small, the distinction between the linear and non-linear
solvers is clear and remarkable. In the linear solver, the relation between the residual and the error holds the
connection between grids; in the non-linear solver, the communication is passed through u* and the modified
right-hand side. This method clearly has several advantages. It is suitable for general non-
linear boundary value problems. Moreover, it can be used with composite grids, that is, with
adaptive mesh refinement [3]. A more complex non-linear example is given by [6], which applies
Multigrid with FAS to a 2-D non-linear time dependent problem, with adaptivity also considered.
3.2 Implementation
3.2.1 Non-linear Finite Difference Method
The same programming principles are used as in the previous chapter. First of all, an initial solver was written
for a simple non-linear 1-D model boundary value problem:

d²u/dx² + u² = sin²(πx) − π²sin(πx), (3.5)

within the region

0 ≤ x ≤ 1,

with boundary conditions fixed at each end of the domain:

u(0) = u(1) = 0.

This problem is selected to have the known solution u(x) = sin(πx). There are a variety of ways
to solve a non-linear equation; here we choose a Newton-like Gauss-Seidel iteration as our solution
method. As before, we first discretize the problem by using the central finite difference method. This
gives the discrete system as the following equation:

(ui−1 − 2ui + ui+1)/(dx)² + ui² = sin²(πxi) − π²sin(πxi).
Then, to solve this finite set of algebraic equations, we use the Newton-like Gauss-Seidel
iteration, so the updating of the internal nodes follows the equation below:

ui = ui − [ (ui−1 − 2ui + ui+1)/(dx)² + ui² − f(xi) ] / ( 2ui − 2/(dx)² ). (3.6)

In this equation, i denotes the index of each internal node, dx denotes the distance between two
adjacent nodes, and f(xi) denotes the right hand side value, which in this case is sin²(πxi) − π²sin(πxi).
Table 3.2 and table 3.3 show the results of using this Newton-like Gauss-Seidel method to solve the
non-linear model boundary value problem.
3.2.2 Non-linear Two-grid Solver
Having implemented an initial non-linear Gauss-Seidel solver (the Newton-like Gauss-Seidel iteration
in the previous section), the non-linear Two-grid solver, which closely follows the strategy of the linear
Two-grid solver, is then implemented.
First of all, we make an initial guess on the fine grid, u^f in this case. Then we apply two non-linear
Gauss-Seidel iteration sweeps on the fine grid in order to smooth the error. At this point, the residual
on the fine grid can be calculated in the usual manner. The next step is to restrict the residual and the
value of u^f to the coarse grid. These restrictions are the same as in the linear case, and full weighting can
also be applied in the usual way. We now hold the values of r^c and u*, which were described earlier
in the first section; note that r^c denotes the residual on the coarse grid and u* denotes the restricted
value of u^f. Before the coarse grid solver runs, an additional term is introduced: the modified right
hand side f^c. This term is calculated using the restricted residual and u* at all internal nodes
of the coarse grid. The equation below shows the calculation of f^c, as described in the previous
section:

f^c_i = r^c_i + (u*_{i−1} − 2u*_i + u*_{i+1})/(dx^c)² + (u*_i)². (3.7)
The next step is to solve the coarse grid problem. Following the main concept of the FAS algorithm, we
need to store the actual value of u on the coarse grid, whereas we stored only the restricted residual in
the linear case. So a new term u^c is introduced, which contains the actual solution value on the coarse
grid. An initial guess is made for this new term; at this point, we use the best approximation that we
currently have, that is u*. The coarse grid solver then uses the non-linear Gauss-Seidel iteration to
obtain the "exact" solution on the coarse grid. The following equation shows this process. Note that
f^c(xi) is the modified right hand side calculated in equation 3.7.
u^c_i = u^c_i − [ (u^c_{i−1} − 2u^c_i + u^c_{i+1})/(dx^c)² + (u^c_i)² − f^c(xi) ] / ( 2u^c_i − 2/(dx^c)² ). (3.8)
After the coarse grid solver has converged, an error can be obtained by subtracting the initial value u*
from the current value u^c. This error is then interpolated to the fine grid by the linear interpolation
algorithm that we used in the linear case, and the old value u^f on the fine grid is updated with this error
term. Finally, two non-linear Gauss-Seidel iterations are taken as the post-smoother. This is one non-linear
Two-grid V-cycle. Table 3.4 in the evaluation section shows the results of the Two-grid solver.
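The computation of the modified right hand side in equation 3.7 can be sketched in C as follows (a sketch; the names and the convention of one array entry per coarse node are assumptions):

```c
/* Modified coarse-grid right hand side of equation 3.7 for the model
   problem u'' + u^2 = f:  f^c = r^c + A^c(u*) at each internal node.
   rc and ustar hold the restricted residual and restricted solution,
   and nc is the number of coarse-grid intervals (dx^c = 1/nc). */
void fas_modified_rhs(const double *rc, const double *ustar,
                      double *fc, int nc)
{
    double dxc2 = 1.0 / ((double)nc * nc);      /* (dx^c)^2 */
    for (int i = 1; i < nc; i++)
        fc[i] = rc[i]
              + (ustar[i-1] - 2.0*ustar[i] + ustar[i+1]) / dxc2
              + ustar[i] * ustar[i];
}
```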
3.2.3 Non-linear Multigrid Solver
As with the linear Multigrid solver, after the implementation of the non-linear Two-grid solver, a non-linear Multigrid solver can be constructed in the same manner. The main difference between the non-linear Two-grid solver and the non-linear Multigrid solver is again the recursion, which only the non-linear Multigrid solver has.
This recursive approach first of all makes an initial guess and calculates the right hand side on the finest grid; then two non-linear Gauss-Seidel iterations are applied over all the internal nodes of that grid as the pre-smoother, in order to smooth the high frequency error, and the residual on the finest grid is calculated. The next step is restricting both the residual and the value of u onto a coarser grid. Additionally, the modified right hand side is also obtained and stored on the coarse grid. This process is called recursively on every grid until the coarsest grid is reached. The coarse grid problem is then solved "exactly" on the coarsest grid by using the non-linear Gauss-Seidel iteration. Since the main reason for using the FAS is to store the actual value of u on every grid (including the coarsest grid), at this point we have obtained an "exact" solution on the coarsest grid. By subtracting the restricted value of u from this solution, we can then obtain the error term. This error is interpolated recursively back to each finer grid, and a simple correction is applied by adding the interpolated error to the old value of u on that grid (including the finest grid). This interpolation process runs until the finest grid is reached. Finally, an additional two non-linear Gauss-Seidel iterations are taken as the post-smoother on each grid immediately after the coarse grid correction. This is our one non-linear Multigrid V-cycle. The evaluation is shown in tables 3.5 and 3.6 in the next section.
3.2.4 Validation Of Non-linear Multigrid Solver
In this section, a validation of the non-linear Multigrid solver is performed to show that not only the central node but all nodes converge. Table 3.1 shows the results which, similarly to the validation of the linear Multigrid solver in the previous chapter, are obtained by starting with a Multigrid solver whose finest grid has eight intervals; when the number of intervals is doubled and the number of grid levels increases each time, the errors at the corresponding nodes on the later finest grids demonstrate the quadratic convergence. Note "Finest Grid = 8" means there are 8 intervals on the finest grid, which gives 9 nodes (including the two boundary points); the node indices start at 0, and nodes 0 and 8 are the boundary points, where the errors are required to be zero.
Nodes Finest Grid = 8 Finest Grid = 16 Finest Grid = 32 Finest Grid = 64
0 0.000000 0.000000 0.000000 0.000000
1 0.005945 0.001474 0.000367 0.000091
2 0.011065 0.002742 0.000684 0.000170
3 0.014544 0.003602 0.000898 0.000224
4 0.015778 0.003907 0.000974 0.000243
5 0.014544 0.003602 0.000898 0.000224
6 0.011065 0.002742 0.000684 0.000170
7 0.005945 0.001474 0.000367 0.000091
8 0.000000 0.000000 0.000000 0.000000
Table 3.1: Errors On All Nodes Of The Finest Grid = 8, And Errors On The Corresponding Nodes Of
The Finest Grid =16, 32 and 64(Stopping Criteria: Difference Between Iterations, Tolerance: 10−6.)
It is clear from table 3.1 that the non-linear Multigrid solver also has the symmetric structure which occurred in the linear case. Additionally, the reduction of the errors on all nodes shows the same quadratic convergence behaviour.
3.3 Software
Appendices G to I show the three implementations that have been discussed previously; apart from some additional calculation (for example, the modified RHS) and the Newton-like Gauss-Seidel iterative solver, they are very similar to the linear ones. Appendix G uses the single-grid Newton-like Gauss-Seidel iteration over all internal nodes. Appendices H and I contain the non-linear Two-grid solver and the non-linear Multigrid solver, which again are coded as functions, so in "main" all functions are called in a certain order. Note the second tolerance is used only in the non-linear Multigrid solver, and a new stopping criterion is used in the Two-grid solver, which is explained in the evaluation section. Appendix J is the non-linear Multigrid solver for a more general problem; this code uses the "pow()" function from the standard maths library, and the variable "P" is defined at the beginning to indicate the power. All of the code needs the -lm flag when compiled with GCC.
3.4 Evaluation
3.4.1 Finite Difference Method
This section shows the results of using the Newton-like Gauss-Seidel iteration to solve the 1-D non-linear model boundary value problem discussed previously. Table 3.2 shows the Newton-like Gauss-Seidel solver with a stopping criterion based upon the maximum difference between iterations (the same stopping criterion used in the linear Gauss-Seidel case). However, the fixed tolerance is now 10−9, which is much smaller than the tolerance used in the linear case. Note the outputs include only the centre value of the grid, which is desired to be 1.
It is clear from table 3.2 that the Newton-like Gauss-Seidel iteration behaves similarly to the linear Gauss-Seidel iteration. That is, when the number of intervals doubles, the number of iterations required for the Newton-like Gauss-Seidel iteration to satisfy this stopping criterion grows by nearly a factor of four. This behaviour shows that the Newton-like Gauss-Seidel solver runs in a time worse than O(n^2), given that the cost of each iteration is O(n).
However, table 3.2 also shows that when the number of intervals increases to 512, the Newton-like Gauss-Seidel iteration is not fully converged, even with the small tolerance that is used. The cause of this is the increasingly slow convergence as the number of intervals becomes large. Therefore, a new stopping criterion is introduced here: forcing the Newton-like Gauss-Seidel solver to run for a pre-determined number of iterations. Since the four-interval case in table 3.2 requires 36 iterations, we let the Newton-like Gauss-Seidel solver run for 50 iterations in the four-interval case; then, as we double the number of intervals each time in table 3.3, the number of forced iterations increases by a factor of four to ensure that the Newton-like Gauss-Seidel
Number of intervals Outputs Number of iterations required
4 1.0657035997161211 36
8 1.0157788942088244 143
16 1.0039073857958727 536
32 1.0009744414503379 1978
64 1.0002429988171111 7223
128 1.0000588617310844 26117
256 1.0000071957802925 93352
512 0.9999717213357174 328934
1024 0.9998726168109970 1137826
Table 3.2: Newton-like Gauss-Seidel Iteration (Stopping Criteria: Difference Between Iterations, Tolerance: 10−9.)
runs for a sufficient number of iterations. Table 3.3 shows the results of using the same Newton-like Gauss-Seidel iteration on the same problem with this new stopping criterion. Note the outputs still only include the centre value of the grid, which again is desired to be 1.
Now, with this new stopping criterion, table 3.3 shows that the Newton-like Gauss-Seidel iteration achieves full convergence. The full convergence is demonstrated since, by doubling the number of intervals each time, the error reduces by approximately a factor of four each time: thus showing O(dx^2) convergence.
3.4.2 Non-linear Two-grid Solver
This section shows the results of using the non-linear Two-grid solver on a number of different combinations of grids (the number of intervals of the coarse grid is always half the number of intervals of the fine grid). Additionally, there are two stopping criteria used in the non-linear Two-grid solver. The first is the new criterion introduced in the previous section, that is, forcing the solver to run for a pre-determined number of iterations; the second is the maximum difference between iterations that we have used several times before. The first criterion is used in the coarse grid solver. The reason for this is that the coarse grid solver in the non-linear case is just the Newton-like Gauss-Seidel iteration (although, as discussed before, the coarse grid solver uses u* as the initial guess and
Number of intervals Outputs Number of iterations forced
4 1.0657036008988092 50
8 1.0157789010735083 200
16 1.0039074163980057 800
32 1.0009745658106777 3200
64 1.0002434990909774 12800
128 1.0000608658801695 51200
256 1.0000152159091078 204800
512 1.0000038039365309 819200
1024 1.0000009509757883 3276800
Table 3.3: Newton-like Gauss-Seidel Iteration (Stopping Criteria: ForcingTo Run A Certain Number
Of Iterations)
takes a modified right hand side). In the two-grid case, the number of intervals on the coarse grid increases each time we increase the number of intervals on the fine grid. Therefore, if we use either the stopping criterion based on the difference between iterations or the one based upon the absolute residual, the coarse grid solve may not be exact, and so the overall solution will fail to converge when the number of intervals gets large. Table 3.4 shows the results of the non-linear Two-grid solver. Note the outputs again only include the solution at the centre of the fine grid, and this output is desired to be 1. With a fixed tolerance of 10−6 and a forced number of coarse grid iterations, it is clear that as the
Intervals on Finest Grid Number of Iterations Forced Outputs V-cycles
8 50 1.0157789073548067 4
16 200 1.0039074181147072 5
32 800 1.0009745660061042 5
64 3200 1.0002434991011684 5
128 12800 1.0000608658833303 5
256 51200 1.0000152159130580 5
512 204800 1.0000038039412853 5
1024 819200 1.0000009509804590 5
2048 3276800 1.0000002377374182 5
4096 13107200 1.0000000594040492 5
Table 3.4: Non-linear Two-grid Solver (Two Stopping Criteria)
density of the fine grid increases, the number of V-cycles taken by this non-linear Two-grid solver stays constant and the error reduces by approximately a factor of four each time. This is consistent with the linear Two-grid solver and the use of the second order central finite difference approximation.
3.4.3 Non-linear Multigrid Solver
In this section, the non-linear Multigrid solver is applied to two different problems. These are, first of all, the 1-D linear model boundary value problem of the previous chapter, and secondly the 1-D non-linear model boundary value problem that we used in this chapter. The linear model boundary value problem is equation 2.6, and table 3.5 shows the results of using the non-linear Multigrid solver. In each run the coarse grid contains just four intervals. A comparison with the linear Multigrid solver is also given in table 3.5, where we expect the results to be approximately the same. Note the stopping criterion for both the linear and non-linear Multigrid solvers is just the difference between iterations with the same fixed tolerance of 10−6, and the outputs for both solvers are the centre value on the finest grid, which is again desired to be 1. It is clear from table 3.5 that both linear and non-linear Multigrid solvers obtain
Finest Grid Linear Multigrid Outputs V-cycles Non-linear Multigrid Outputs V-cycles
8 1.0129507554610295 4 1.0129507554610295 4
16 1.0032189758355370 5 1.0032189758355377 5
32 1.0008036061669401 5 1.0008036061669399 5
64 1.0002008293377787 6 1.0002008293377782 6
128 1.0000502079775473 6 1.0000502079775584 6
256 1.0000125568287128 6 1.0000125568287286 6
512 1.0000031443732953 6 1.0000031443732869 6
1024 1.0000007912695414 6 1.0000007912695053 6
Table 3.5: Linear And Non-linear Multigrid Solvers (Stopping Criteria: Difference Between Iterations, Tolerance: 10−6.)
an approximately equal solution to the same linear model boundary value problem, and the number of V-cycles behaves the same. Again, convergence is assumed when the infinity norm of the difference between solutions after consecutive V-cycles is less than a prescribed tolerance (10−6 in this case).
Now we move on to the non-linear model boundary value problem (equation 3.5). Table 3.6 shows the number of V-cycles of the non-linear Multigrid solver and the CPU time taken to converge. Note the stopping criterion is still the difference between iterations with the same fixed tolerance of 10−6.
It can be seen from table 3.6 that, with an increasing number of grids as well as a doubling number of intervals on the finest grid, the non-linear Multigrid solver behaves similarly to the linear one. That
Finest Grid V-cycles CPU Time (seconds)
1024 6 0.005
2048 6 0.013
4096 6 0.020
8192 6 0.042
16384 6 0.079
32768 6 0.161
65536 6 0.318
131072 6 0.644
262144 6 1.290
524288 6 2.595
1048576 6 5.193
Table 3.6: Non-linear Multigrid Solver (Stopping Criteria: Difference Between Iterations, Tolerance:
10−6.)
is, with a fixed tolerance of 10−6, not only does the number of V-cycles stay constant, but the CPU time also increases linearly. This verifies that the non-linear Multigrid method, for a 1-D non-linear boundary value problem, runs in O(n) time.
3.4.4 Non-linear Multigrid Solver On A More General Problem
Having tested the non-linear Two-grid and the non-linear Multigrid solvers on the non-linear model boundary value problem (equation 3.5), a more general non-linear boundary value problem is implemented and evaluated in this section. This more general problem takes the form:

d^2u/dx^2 + u^p = f(x),   (3.9)

with the same domain and boundary conditions as before. Note p is any positive integer and f(x) = sin(πx)^p − π^2 sin(πx). Applying the central finite differences gives the following equation at each internal node:
(u_{i-1} − 2u_i + u_{i+1}) / dx^2 + u_i^p = sin(πx_i)^p − π^2 sin(πx_i).   (3.10)
Note the Newton-like Gauss-Seidel iteration can still be applied in the usual way (with respect to the change of p), where p again is any positive integer. Table 3.7 shows the results of the non-linear Multigrid solver with p equal to 1 through 5 respectively.
Finest Grid p = 1 V-cycles p = 2 V-cycles p = 3 V-cycles p = 4 V-cycles p = 5 V-cycles
8 4 4 4 5 5
16 5 5 5 5 5
32 5 5 5 5 6
64 6 6 6 6 6
128 6 6 6 6 6
256 6 6 6 6 6
512 6 6 6 6 6
1024 6 6 6 6 6
2048 6 6 6 6 6
Table 3.7: Non-linear Multigrid Solver (Stopping Criteria: Difference Between Iterations, Tolerance:
10−6.)
Table 3.7 demonstrates that even when the non-linear Multigrid solver is applied to this more general non-linear boundary value problem, it still has the mesh-independent property as the density and the number of levels increase each time. However, for the first few cases, there is a small increase in the number of V-cycles. This is believed to be because the problem gets harder to solve as p increases; therefore, the non-linear Multigrid solver needs an extra V-cycle to satisfy the stopping criterion.
3.5 FMG and W-cycle
The Full Multigrid (FMG) technique is a combination of the original V-cycle (or W-cycle) and a coarse grid solver. This coarse grid solver was mentioned by Brandt in his 1977 paper as having been invented and used for years; by my research, the earliest origin is Southwell [10] in 1946. The idea is that, starting on the coarsest grid, we can obtain a coarse grid solution by using the coarse grid solver that we have used before in the non-linear Two-grid and non-linear Multigrid solvers. Then, by interpolating this solution back to the finest grid, it gives us a much better initial guess.
Later on, this use of grids is combined with the Multigrid V-cycle (with either the linear or the non-linear solver, depending upon the nature of the problem itself). So it not only interpolates the coarse grid solution but also runs V-cycles after each interpolation to further improve the solution. Therefore, even before the actual linear or non-linear Multigrid solver runs, the initial guess is not an arbitrary value but a corrected solution. This improved initial guess reduces the running time of the original Multigrid solver and, in some cases, can achieve a large reduction in the number of V-cycles needed by the plain linear or non-linear Multigrid solver. This better initial guess from FMG can also make the difference between convergence and non-convergence of the non-linear Multigrid solver on some non-linear problems (this feature can be demonstrated when FMG is applied to some hard non-linear problems). Figure 3.1 shows an example of the FMG applied on four grids.
Figure 3.1: FMG On Four Levels Of Grids
As figure 3.1 shows, the FMG first of all makes an initial guess on the coarsest grid, and applies a coarse grid solver (the non-linear Gauss-Seidel iteration in the non-linear case) to obtain the value of u on this grid. Then it interpolates this value of u to the next finer level (which in figure 3.1 is called the coarse level), and one Multigrid V-cycle is applied. This process runs one Multigrid V-cycle after each new interpolation, until the finest grid is reached (note that no V-cycle is applied after the final interpolation). Therefore, the initial guess on the finest grid now is the solution after several V-cycles. Hence, when a linear
or non-linear Multigrid solver now starts, it can benefit from this better initial guess.
The evaluation combines the FMG and the non-linear Multigrid solver; table 3.8 in the next section demonstrates its advantage. The FMG technique is also the starting point of the later adaptive Multigrid method, which is discussed in detail in the next chapter.
The W-cycle is a different strategy for how the Multigrid solver moves between grids. Similar to the V-cycle, it starts from the finest grid and ends on the finest grid, and it also has the coarse grid solver. Its restriction and interpolation use the same algorithms as the V-cycle. The difference is when the restrictions and interpolations are called. Figure 3.2 illustrates one standard W2-cycle (the meaning of W2 is explained later in this section).
Figure 3.2: One W2-Cycle
It is clear from figure 3.2 that, until the first coarse grid solver is called, the restrictions are the same as in the V-cycle. However, once the first interpolation is applied from the coarsest grid, after each new interpolation (including the first), the W-cycle runs a second coarse grid correction from the current interpolated grid. W2 in this case indicates that we run one additional coarse grid correction after each new interpolation; so W1 is the V-cycle, W3 runs two additional coarse grid corrections after each new interpolation, and so on.
The change in the W-cycle is the increased number of coarse grid solves applied in one cycle, along with many more pre-smoothing and post-smoothing sweeps on each grid level. The advantage of the W-cycle, therefore, is that by applying more coarsest grid solves, the problem should converge in fewer cycles. On the other hand, the trade-off of the W-cycle is efficiency: with more pre- and post-smoothers and coarsest grid solves, it involves many more calculations and takes much more time to run one W-cycle.
The choice between the V-cycle and the W-cycle depends upon the nature of the problem that we are trying to solve. For example, if the problem is easy, the V-cycle strategy will satisfy the needs of both accuracy and efficiency. However, if the problem is highly non-linear, applying many more coarsest grid solves will reduce the error significantly, and the W-cycle may be a better choice. An evaluation using the W-cycle on a more general 1-D non-linear model boundary value problem is given in the next section.
3.6 Further Software Development
Appendix K shows the code of the FMG; since the FMG is not a complete solver, it is combined with the standard non-linear Multigrid solver. The FMG occurs in the "main" function, and since it is not a repeatable process, it has no parent loop. Appendix L contains the W-cycle. This code is a modified version of the non-linear Multigrid solver (which runs the V-cycle). The W-cycle strategy is achieved in the "Multigrid" function, which uses a variable "w" to control the additional loop. Appendix M shows the W-cycle solver on a more general problem; this code is a modified version of appendix J, with the W-cycle strategy included to replace the V-cycle. Note the second tolerance is used in all cases so we can obtain an "exact" solution on the coarsest grid.
3.7 Further Evaluation
3.7.1 FMG With Non-linear Multigrid Solver
This section evaluates the FMG with the non-linear Multigrid solver on the model boundary value problem (equation 3.5). The stopping criterion is the difference between iterations with a fixed tolerance of 10−12. Table 3.8 shows the results of applying the FMG to the standard non-linear Multigrid solver. Note the outputs are still only the centre value of the grid, which again is desired to be 1.
It is clear that the FMG with the non-linear Multigrid solver achieves convergence on this problem (convergence again is assumed when the infinity norm of the difference between solutions after consecutive V-cycles is less than a prescribed tolerance), and the error reduces by approximately a factor of four each
Finest Grid Outputs V-cycles
8 1.0157587795680718 2
16 1.0038841505817695 2
32 1.0009643021841306 2
64 1.0002404608395759 2
128 1.0000600638281758 2
256 1.0000150113909536 2
512 1.0000037523998757 2
1024 1.0000009380514894 2
Table 3.8: FMG With Non-linear Multigrid Solver (Stopping Criteria: Difference Between Iterations,
Tolerance: 10−12.)
time. Furthermore, there is a noticeable observation: the number of V-cycles is small even while the number of grid levels is increasing. This is caused by the FMG. When the FMG runs on only a few levels of grids, there are not many V-cycles and Newton-like Gauss-Seidel iterations involved, so the error in the initial guess provided by the FMG is still relatively large, and the non-linear Multigrid solver needs to run for several V-cycles to satisfy the stopping criterion. However, when the number of grid levels becomes sufficiently large, and since this model boundary value problem is relatively easy to solve, the V-cycles and the Newton-like Gauss-Seidel iterations in the FMG process already solve the problem, and the initial guess becomes a good solution. So, overall, the non-linear Multigrid solver with FMG does not need to run as many V-cycles as the plain non-linear Multigrid solver to satisfy the stopping criterion (even though the tolerance used with the FMG is much smaller than for the plain non-linear Multigrid solver).
3.7.2 W-cycle On A Simplified Non-linear Problem
In this section, the standard non-linear Multigrid solver takes the W-cycle strategy and is applied to the simple non-linear model boundary value problem (equation 3.5). The stopping criterion is the difference between iterations with a fixed tolerance of 10−6. Table 3.9 shows the results of a comparison between the V-cycle and W1 (since, as already discussed, W1 is the V-cycle itself, we expect the solutions to be identical). Note the outputs are still the centre value on the grid, which is desired to be 1.
Finest Grid V-cycle Outputs V-cycles W1-cycle Outputs W1-cycles
8 1.0157789143876204 4 1.0157789143876204 4
16 1.0039074315856247 5 1.0039074315856247 5
32 1.0009745994810400 5 1.0009745994810400 5
64 1.0002435092972004 6 1.0002435092972004 6
128 1.0000608755138762 6 1.0000608755138762 6
256 1.0000152253495023 6 1.0000152253495023 6
512 1.0000038134124574 6 1.0000038134124574 6
1024 1.0000009604432261 6 1.0000009604432261 6
2048 1.0000002471759499 6 1.0000002471759499 6
Table 3.9: Comparison Between V-cycle and W1-cycle (Stopping Criteria: Difference Between Itera-
tions, Tolerance: 10−6.)
It is clear from table 3.9 that the V-cycle and the W1-cycle produce the same solution to the same problem under the same stopping criterion.
Another evaluation is then done by applying the W2, W3 and W4 cycles respectively to the same problem. Again the stopping criterion is the difference between iterations with the same fixed tolerance of 10−6. Table 3.10 shows the number of cycles each of them takes as the number of grid levels increases.
From table 3.10, we can see that all three W-cycles have the same mesh-independent property as the V-cycle, and the numbers of cycles are approximately the same. The only difference is that, when applying the W4-cycle to the problem whose finest grid has eight intervals, the number of cycles is one less than the others. The reason for this is that the W4-cycle takes many more coarse grid solves and Newton-like Gauss-Seidel iterations than the W2- and W3-cycles. Since the problem is relatively easy to solve, the W4-cycle only needs to run twice to satisfy the stopping criterion.
3.7.3 W-cycle On A More General Non-linear Problem
Having evaluated the W-cycle on the simple non-linear model problem, we now apply the standard non-linear Multigrid solver with the W-cycle strategy to a more general non-linear problem (equation
Finest Grid W2-cycles W3-cycles W4-cycles
8 3 3 2
16 5 5 5
32 5 5 5
64 5 5 5
128 5 5 5
256 5 5 5
512 5 5 5
1024 5 5 5
2048 5 5 5
Table 3.10: W2, W3 and W4 Cycles (Stopping Criteria: Difference Between Iterations, Tolerance: 10−6.)
3.9). However, before this, a comparison is given between the V-cycle and the W2-cycle on the number of cycles and CPU time when p = 2 (so the problem is again equation 3.5). The reason for using only the W2-cycle is that the W2-cycle is the one commonly referred to simply as the W-cycle, and we have seen in the previous section that the W3- and W4-cycles differ little from the W2-cycle on this model boundary value problem. Table 3.11 gives this comparison. Note the stopping criterion is still the difference between iterations, now with a smaller fixed tolerance of 10−12. The reason for using this much smaller tolerance is that the old tolerance of 10−6 cannot correctly terminate the process, since the difference between iterations is already smaller than that tolerance in the last few cases.
From table 3.11 we can see that the W2-cycle takes much longer to converge than the V-cycle. On the other hand, the number of W2-cycles stays at only eight, while the V-cycle needs ten cycles to satisfy the small stopping criterion. Since the tolerance is now much smaller, the increase in the number of cycles, compared with the previous non-linear Multigrid results, is reasonable, and the number of cycles is still mesh independent.
The more general problem (equation 3.9) is then solved with p = 3. Table 3.12 shows the results for both the V-cycle and the W2-cycle. The stopping criterion is unchanged, and the tolerance remains 10−12 for the same reason as before.
From table 3.12, it is clear that the W2-cycle still takes longer to converge than the V-cycle. On the other
Finest Grid V-cycles CPU Time(second) W2-cycles CPU Time(second)
16 9 0.001 8 0.000
32 10 0.001 8 0.001
64 10 0.001 8 0.001
128 10 0.001 8 0.001
256 10 0.001 8 0.003
512 10 0.002 8 0.004
1024 10 0.002 8 0.012
2048 10 0.007 8 0.022
4096 10 0.011 8 0.050
8192 10 0.028 8 0.110
16384 10 0.054 8 0.279
Table 3.11: Comparison Between V-cycle and W2-cycle Whenp = 2 (Stopping Criteria: Difference
Between Iterations, Tolerance: 10−12.)
Finest Grid V-cycles CPU Time(second) W2-cycles CPU Time(second)
16 9 0.000 8 0.000
32 10 0.001 8 0.002
64 10 0.002 8 0.002
128 10 0.003 8 0.006
256 10 0.004 8 0.011
512 10 0.009 8 0.026
1024 10 0.017 8 0.053
2048 10 0.034 8 0.118
4096 10 0.063 8 0.253
8192 10 0.127 8 0.547
16384 10 0.250 8 1.467
Table 3.12: Comparison Between V-cycle and W2-cycle Whenp = 3 (Stopping Criteria: Difference
Between Iterations, Tolerance: 10−12.)
hand, the W2-cycle still requires two fewer cycles than the V-cycle. This again verifies what we discussed previously: the W-cycle includes more calculations, so it requires fewer cycles to converge; however, a number of these calculations are not necessary and do not improve the solution very much, and therefore the W-cycle, compared to the V-cycle, takes longer to converge.
3.8 Discussion
This chapter has shown in detail the non-linear Multigrid method and its implementations, as well as the Newton-like Gauss-Seidel iteration. The Newton-like Gauss-Seidel was first shown and evaluated with a new stopping criterion, that is, forcing the iterative method to run a pre-determined number of iterations in order to guarantee convergence. Then a non-linear Two-grid solver was introduced with two new terms, u* and f^c, which store the restricted value of u from the fine grid and the modified right hand side respectively. By using the Newton-like Gauss-Seidel iteration, the Two-grid solver successfully solved the non-linear model boundary value problem. A non-linear Multigrid solver, using a recursive approach, was then built straightforwardly from the Two-grid solver. Evaluations were done by applying this non-linear Multigrid solver to one simple and one more general model boundary value problem. Later on, the FMG and the W-cycle, and their applications, were also shown. The FMG approach successfully improved the non-linear Multigrid solver by giving a much better initial guess. The advantages and disadvantages of the W-cycle were also evaluated and shown by applying it to both the simple and the more general model boundary value problem; however, the choice of either V-cycle or W-cycle still depends upon the problem itself. Having the FMG approach implemented, we can now apply adaptivity to our non-linear Multigrid solver.
Chapter 4
Adaptive Multigrid
This chapter describes the third concept of Brandt's 1977 paper: the Multi-Level Adaptive Technique (MLAT) with the FAS algorithm. We have discussed the FAS algorithm in detail in the previous chapter; now we apply adaptivity to our non-linear Multigrid method. The first section describes the general idea of the adaptive Multigrid method. Section 4.2 demonstrates the implementation of the adaptive Multigrid solver, which is extended from the non-linear Multigrid solver. Section 4.3 gives a general view of how my program is structured. Section 4.4 evaluates the adaptive Multigrid solver on a 1-D non-linear model boundary value problem. Finally, section 4.5 gives a conclusion to this chapter.
4.1 Description of Adaptive Multigrid Method
Due to the nature of some boundary value problems, the dependent variables may change significantly only in a small local region; consequently, we are more interested in this small region than in the whole domain. Therefore, performing expensive computations for the full problem by using a very fine grid everywhere is unnecessary and inefficient. To allow grids to become sufficiently fine where needed during the solution process, an important feature is therefore needed in the method: adaptivity. In Brandt's paper, the Multi-Level Adaptive Technique (MLAT) is proposed to satisfy this requirement [2]. An important characteristic of MLAT is that it allows the Multigrid solver to take account
of the fact that the fine grid does not cover the same domain as the coarse grid.
This technique still uses FAS, like the standard non-linear Multigrid method, and the finite difference method (FDM) may be used for discretization in the traditional fashion, as explained in a previous section. Brandt states that the finite element method (FEM) is an alternative discretization scheme, but this project will not consider the FEM.
For a general boundary value problem (either equation 2.1 or equation 3.1), MLAT solves on the fine grid in the usual way. On the fine grid (or the finest grid if there are more than two grids) it applies the FDM to obtain a discretization of the problem covering the region of the domain with this level of refinement, and solves for the value at each internal node by an iterative method. Figure 4.1 illustrates the use of MLAT on a fine grid with 3 internal nodes at the finest level.
Figure 4.1: MLAT on the fine grid (From Temporary Left Boundary To Right Boundary)
Now, with the fine grid solved, the FAS update is applied on the coarse grid. An example is shown in Figure 4.2.
Figure 4.2: MLAT on the coarse grid
The coarse grid in Figure 4.2 takes the domain boundary as its left boundary point, while the right boundary point remains unchanged. Now the problem is solved on the whole of the original domain with a modified right-hand side. The modified right-hand side takes into account the fine grid solution on the refined part of the domain by modifying the coarse grid right-hand side for points in this region. Then the solution on the coarse grid can be calculated, and the approximated error is interpolated back to the fine grid in the usual manner. Note that this will update the temporary left-hand boundary condition
49
shown in Figure 4.1, as well as the internal nodes on the fine grid. The post-smoothing then takes place
on the fine grid in the usual manner.
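For reference, the modified right-hand side on the refined part of the domain is the standard FAS one. In the notation used later in this chapter ($u^{*}$ for the restricted fine-grid solution, $r_c$ for the restricted residual, $A_c$ and $A_f$ for the coarse- and fine-grid operators, and $I_h^H$ denoting restriction), a sketch of the coarse-grid equation is:

$$A_c(u_c) = A_c(u^{*}) + r_c, \qquad r_c = I_h^{H}\big(f_f - A_f(u_f)\big),$$

so the modified right-hand side $A_c(u^{*}) + r_c$ is used at the nodes covered by the fine grid, while the original right-hand side $f_c$ is kept elsewhere.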
The advantage of this method is obvious. By defining new boundary point(s) for the finer grids, the amount of computation is reduced significantly. MLAT is shown in [2] to maintain its efficiency when applied on non-uniform grids and to non-linear problems. A real application of this method is given in [6], which achieves fully adaptive time and space discretizations.
There is another adaptive Multigrid method, called the Fast Adaptive Composite Grid Method (FAC) [13]. It introduces a number of additional concepts, for example the composite grid, on which the solution is approximated, and the border points, which lie outside the local fine grid and are used temporarily to develop special interface stencils. The FAC method, in comparison with MLAT, can effectively solve the problem using global grids but local relaxation; however, it only operates on sequences of uniform grids. This project focuses only on MLAT as introduced previously, even though there are other adaptive methods, such as FAC. More application examples can be found in [12].
4.2 Implementation
The implementation of MLAT has only been done within the adaptive Multigrid solver. The reason for this is, first of all, that the adaptive Multigrid solver, as mentioned previously, still uses the Newton-like Gauss-Seidel iteration as the solution method (used as the pre-smoother, the post-smoother and within the coarse grid solver), which is described in detail in the previous chapter. Secondly, the adaptivity has so far only been applied on the finest grid (a single grid), and therefore we can go straight to the Multigrid solver.
This adaptive Multigrid solver is built from the non-linear Multigrid solver with FMG. The modifications are that we now define the left and right temporary boundary points on the finest grid, to reflect local refinement on this grid. Since we are taking the same simple non-linear model boundary value problem from the previous chapter (equation 3.5), the refined region on the finest grid differs from the example and figures in the previous section; refinement now takes place in the centre of the domain. Figure 4.3 shows an example of MLAT on three levels of grids. Note that when FMG is applied before MLAT, all grids below the finest level are uniform grids in this case.
Figure 4.3: An Example of MLAT On Three Levels of Grids

After FMG is applied, the finest grid (the one that is adapted in figure 4.3) has its initial guess interpolated from the FMG approach. Then MLAT runs on this grid. First of all, two Newton-like Gauss-Seidel iterations are applied, but only for the internal nodes (excluding the left and right temporary boundary points). Then the residual is calculated in the usual fashion for these points (although, in my program, the residual is calculated over the whole domain for programming convenience). Then the restrictions of the residual $r_f$ and the values $u_f$, to $r_c$ and $u^{*}$ respectively, are carried out ($r_f$ denotes the residual on the fine grid, $r_c$ the residual on the coarse grid, $u_f$ the actual values on the fine grid and $u^{*}$ the intermediate values described in the previous chapter). Note that during the restriction of $u_f$ to $u^{*}$, full weighting is applied only for the internal nodes (excluding the left and right temporary boundary points on both grids). Then the modified right-hand side is calculated from the internal nodes (the nodes in the intersection in figure 4.3, excluding the left and right temporary boundary points). The other nodes take the original right-hand side value (some of these need to be calculated, since the finest grid does not contain them).
This process runs recursively until the coarsest grid is reached (but it is not applied on the coarsest grid). Then the coarse grid solver from the non-linear Multigrid solver in the previous chapter solves the coarse grid problem "exactly" for the whole domain. Then the error term $e$ can be calculated in the same manner, but only for the internal nodes and the left and right temporary boundary points (the temporary boundary points are included this time because we can update them later on). Then the interpolation and correction can be done in the usual way. Note that since we only have the error values of the nodes in the intersection region (including the temporary boundary points), the correction can only be done for these nodes. This is a recursive process, like the non-linear Multigrid solver, and it runs until the finest grid is reached (interpolation and correction are applied on the finest grid). This is one MLAT V-cycle, and this adaptive Multigrid solver is tested in the evaluation section on the simple non-linear model boundary value problem that we used in the previous chapter.
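To summarise, one MLAT V-cycle as described above can be sketched in pseudocode (the names and steps follow this chapter's description; this is an outline, not the code of Appendix N):

```
MLAT_Vcycle(level):
    if level is the coarsest grid:
        solve the coarse problem "exactly"
        (Newton-like Gauss-Seidel, second tolerance)
        return
    pre-smooth: two Newton-like Gauss-Seidel sweeps on the
        internal nodes (excluding temporary boundary points)
    compute the residual r_f at these nodes
    restrict r_f -> r_c and u_f -> u* (full weighting,
        internal nodes only)
    build the modified right-hand side on the coarser grid:
        intersection nodes: FAS value from r_c and u*
        all other nodes:    original right-hand side
    MLAT_Vcycle(level + 1)
    e = (coarse solution) - u* on the intersection nodes
        and the temporary boundary points
    interpolate e to the finer grid and correct u_f
        (including the temporary boundary points)
    post-smooth: two Newton-like Gauss-Seidel sweeps
```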
4.3 Software
Appendix N shows the code of the adaptive Multigrid solver. This code is a modified version of the non-linear Multigrid solver, which includes an additional calculation to obtain the temporary boundary points on the finest grid. Therefore, most of the functions may not start at node 1 and may not end at node N−1 (N denotes the number of intervals). Note that the second tolerance is also used here, to force the coarse grid solver to obtain an "exact" solution. "MyLib.h" is a library file containing all the functions from the original non-linear Multigrid solver in the previous chapter.
4.4 Evaluation
The evaluation of the adaptive Multigrid solver is done on the simple non-linear model boundary value problem (equation 3.5). The stopping criterion is the difference between iterations, with a fixed tolerance of $10^{-12}$. Table 4.1 shows the results of using the adaptive Multigrid solver on this model problem. Note that the output quoted is the value at the centre node of each grid, whose desired value is 1.
It is clear from table 4.1 that the adaptive Multigrid solver achieves convergence on this model problem. Note that convergence here is assumed when the infinity norm of the difference between solutions after consecutive V-cycles is less than a prescribed tolerance. The error reduces by approximately a factor of four each time the grid is refined. Furthermore, since MLAT here includes the FMG approach discussed in the previous chapter, it also exhibits the FMG property: as the number of grid levels increases, the FMG not only provides a better initial guess, but the initial guess that it provides is itself a good solution to this simple model problem. Another important feature is that MLAT takes less time to converge compared to table 3.11 in the previous chapter (the V-cycle, W2-cycle and the current adaptive Multigrid solver solve the same 1-D non-linear model boundary value problem with the same stopping criterion). This result verifies that when the adaptivity and the FMG are combined, the solver takes a remarkable advantage over the single non-linear Multigrid solver (either V-cycle or W-cycle) in
Finest Grid    Outputs                 V-cycles    CPU Time (seconds)
16 1.0039241676006814 7 0.004
32 1.0009463703337964 8 0.004
64 1.0002305849919699 8 0.004
128 1.0000577198995513 7 0.006
256 1.0000144243864115 7 0.006
512 1.0000036045836813 7 0.007
1024 1.0000009009091779 7 0.010
2048 1.0000002251957121 6 0.016
4096 1.0000000562949711 6 0.017
8192 1.0000000140736196 7 0.030
16384 1.0000000035173624 5 0.044
Table 4.1: Adaptive Multigrid Solver (Stopping Criterion: Difference Between Iterations, Tolerance: $10^{-12}$.)
terms of efficiency.
4.5 Discussion
This chapter describes the adaptive Multigrid method, which is the standard non-linear Multigrid solver with adaptivity on the finest grid. It uses the original Newton-like Gauss-Seidel iteration as the solution method, and is constructed in a way that is very similar to the standard non-linear Multigrid solver. Even though so far we have only seen the adaptivity on the finest grid (a single grid), the evaluation shows that this adaptive Multigrid solver achieves a remarkable improvement in terms of efficiency compared to the standard non-linear Multigrid solver. This supports the choice of Multigrid with adaptivity.
Chapter 5
Conclusion
Having seen all three concepts of Brandt's 1977 paper, to summarize this report, section 5.1 gives a conclusion on the whole project. Section 5.2 discusses one possible extension of this project, namely the Multigrid method in 2-D. Section 5.3 illustrates another possible extension, a more general adaptive Multigrid solver.
5.1 Discussion
In conclusion, this project has successfully demonstrated all three concepts of the Multigrid method that were proposed and discussed in Brandt's 1977 paper [3]. Additionally, some background material needed before learning the Multigrid method, and a number of techniques related to the Multigrid method, have also been included in this project.
First of all, we described the reason for using the Multigrid method, which is to solve boundary value problems efficiently. Such problems were also the reason that Brandt and others invented and developed the Multigrid method. Then, in order to solve such problems, a discretization scheme and a solution method are needed by the Multigrid method. In this project, we used the central finite difference method as the discretization scheme and the Gauss-Seidel iterative solver as the underlying solution method (although some modifications of the standard Gauss-Seidel method were made in order to solve non-linear problems), and some alternatives were discussed. Then we moved towards the Multigrid method itself. Brandt's 1977 paper was described in detail, and some recent examples of further research using the Multigrid method were given. These examples showed that the Multigrid method can solve many problems other than the two-point boundary value problems which were the only class of problem solved in this project.
Secondly, we moved into the implementation of the Multigrid method in detail. The first implementation was the standard Multigrid solver for a 1-D linear model boundary value problem, which is the minimum requirement of this project. This implementation was done in an incremental way: the single Gauss-Seidel iterative solver was implemented first; then the Two-grid solver was introduced, which used the Gauss-Seidel solver and some additional calculations to enable the solver to move between two grids; finally, the Multigrid solver was built from the Two-grid solver, which allowed the solver to use multiple grids in order to solve the problem. Evaluation was undertaken on two different problems to demonstrate the successful implementation and the optimal results.
Then, another implementation of the Multigrid solver was done in order to solve a 1-D non-linear model boundary value problem. This implementation followed the same incremental methodology used in the linear case. The single Newton-like Gauss-Seidel iterative solver was introduced first; then a Two-grid solver used it to solve the problem on two grids; finally, the Multigrid solver was built on the Two-grid solver so that the problem could be solved using multiple grids. Similarly, evaluation was undertaken after the implementation, and optimal results were obtained. At this point, the project had exceeded the minimum requirement. The next step was to introduce two techniques that are currently used in the development of the Multigrid method: FMG and the W-cycle. These two techniques were mentioned as possible extensions in the initial project plan, and they have now been discussed and implemented in this project. The combination of FMG and the non-linear Multigrid solver improved on the single non-linear Multigrid solver by giving a better initial guess. The W-cycle has been tested on both the original problem and a more general one; however, the advantage of the W-cycle may not occur for every problem.
Finally, the adaptive Multigrid method was demonstrated in detail. From the evaluation, we can see that even though the adaptivity was applied on only one grid, it further improved our solution in terms of speed, while the accuracy was approximately maintained, at least in the refined region. It should be noted that the full generality of the FAS/MLAT combination has not been explored, due to lack of time within this project.
Memory allocation is a known problem in this project. Because of the structure of the Multigrid solver and its recursive approach, it is difficult to free all the memory allocations when one V-cycle is finished. When the recursive routine allocates the same array again (but on a different grid), the memory is allocated under the same name. Therefore, to fully free all the allocated memory, the program needs to terminate. On the other hand, we believe that even though the memory allocations use the same name for one array, the C programming language takes care of re-allocating this piece of memory rather than leaking it, and so the Multigrid solvers can easily run on a fine grid with more than one million nodes. However, since this problem has not been fully solved, it is worth noting here.
5.2 Extension To 2-D
This section discusses the extension to 2-D. In principle, the 2-D Multigrid method is straightforward once the 1-D Multigrid method is understood, although the programming complexity is clearly greater. Therefore, this section indicates the main differences between 1-D and 2-D. Section 5.2.1 describes the general structure of the 2-D model boundary value problem. Section 5.2.2 indicates the additional terms in the discretization scheme. Section 5.2.3 describes the differences in the solution method. Section 5.2.4 gives some detail on the 2-D Multigrid solver.
5.2.1 2-D Boundary Value Problem
2-D boundary value problems (BVPs) have the same character as 1-D BVPs, but we are now dealing with boundary value partial differential equations (PDEs). Recalling equation 1.1 in the first chapter, a general form of a linear, constant coefficient, 2-D BVP is similar:

$$a\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right)+b\left(\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}\right)+cu=f(x,y), \qquad (5.1)$$
for the domain and boundary conditions, we now impose:

$$x_{left} \le x \le x_{right}, \qquad y_{bottom} \le y \le y_{top},$$
$$u(x_{left},y)=u_{left}, \qquad u(x_{right},y)=u_{right},$$
$$u(x,y_{bottom})=u_{bottom}, \qquad u(x,y_{top})=u_{top}.$$
Here it is assumed that all coefficients are constant. Furthermore, since we are interested in the second order term, we simplify the rest of this discussion by assuming that the coefficients $b$ and $c$ are zero and $a$ is one. The 2-D boundary value problem is now:

$$\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=f(x,y). \qquad (5.2)$$

Note, for simplicity, that the domain is square: $(y_{top}-y_{bottom})=(x_{right}-x_{left})$. Another noteworthy observation is that the problem is elliptic only when the coefficients satisfy $b^2-4ac<0$ ($b^2-4ac=0$ indicates a parabolic problem and $b^2-4ac>0$ a hyperbolic one) [9].
5.2.2 2-D Discretization
Recalling section 1.2 in the first chapter, we again use the central finite difference method as our discretization scheme. The problem here will be solved on a uniform grid, which gives $dx = dy$. Therefore, applying this method to equation 5.2, the discretization becomes:

$$\frac{u_{i,j-1}+u_{i-1,j}-4u_{i,j}+u_{i+1,j}+u_{i,j+1}}{dx^2}=f_{i,j}, \qquad (5.3)$$

for each interior node $i=1,\dots,N-1$ and $j=1,\dots,N-1$, where $dx=dy=\frac{x_{right}-x_{left}}{N}$ and $f_{i,j}=f(x_i,y_j)$. $N$ is the number of intervals in one row or column of the discretization. Figure 5.1 shows such a uniform grid, with one node $u_{i,j}$ and its four neighbours:
Again, there are other alternative discretization schemes, such as the finite element method in 2-D. However, we only consider the central finite difference method in this discussion.
5.2.3 2-D Solution Method
Recalling section 1.3 in the first chapter, the Jacobi iterative method and Gaussian elimination can still be used in the 2-D case. However, we only consider the Gauss-Seidel iterative method here. Since the discretization was obtained in equation 5.3 above, the Gauss-Seidel solver is relatively easy to modify. The following equation shows how the Gauss-Seidel method updates one internal node:

$$u^{k+1}_{i,j}=\frac{u^{k+1}_{i,j-1}+u^{k+1}_{i-1,j}+u^{k}_{i+1,j}+u^{k}_{i,j+1}-f_{i,j}\,dx^2}{4}. \qquad (5.4)$$
Figure 5.1: 2-D Uniform Grid On A Square Domain
Note that in this equation, $k$ denotes the current iteration level and $i,j=1,\dots,N-1$ index each internal node. It is then clear that this Gauss-Seidel method still uses the current node's neighbours (four of them in this case) to update the current node.
5.2.4 2-D Multigrid
The 2-D Multigrid method uses the same structure as the 1-D one. However, apart from the Gauss-Seidel solver, there are still many places that differ from 1-D Multigrid, since in 2-D we need to include many more nodes in each calculation. Recalling section 2.1 in chapter 2, the residual equation 2.3 and the error equation 2.4 remain the same; however, we now have four neighbouring nodes rather than just two as in the 1-D case.
The noteworthy differences are the full weighting operator and the interpolation in 2-D. The interpolation is relatively easy: nodes on the fine grid which have a corresponding node on the coarse grid simply take that value exactly; nodes in between two such nodes take the average of their two neighbours; and nodes in the middle of a coarse-grid square take the average of their four neighbours.
The 2-D full weighting operator, suggested by Briggs et al. in [13], now becomes the following:

$$u^{c}_{i,j}=\frac{1}{16}\Big[u^{f}_{2i-1,2j-1}+u^{f}_{2i-1,2j+1}+u^{f}_{2i+1,2j-1}+u^{f}_{2i+1,2j+1}+2\big(u^{f}_{2i,2j-1}+u^{f}_{2i,2j+1}+u^{f}_{2i-1,2j}+u^{f}_{2i+1,2j}\big)+4u^{f}_{2i,2j}\Big]. \qquad (5.5)$$

Note that the superscript $f$ denotes a node on the fine grid and $c$ a node on the coarse grid, and $i,j=1,\dots,N-1$, where $N$ denotes the number of intervals in one row or column of the coarse grid. After these changes, the Multigrid solver can proceed in the usual manner with the same recursive process.
5.3 Extension To More General Adaptive Multigrid
In the previous chapter we discussed the adaptive Multigrid method in general. However, the implementation only applied adaptivity to one grid (the finest grid). A more general adaptive Multigrid can therefore be considered as another possible extension to this project, namely adaptivity within multiple regions. Figure 5.2 shows an example of two regions with adaptivity.

Figure 5.2: Adaptive Multigrid With Two Refined Regions

It is clear from figure 5.2 that we now have two refined regions. The regions need not be in the middle or near the boundary; a more general adaptive Multigrid might allow user-defined refined regions. However, there are difficulties in changing the adaptive Multigrid solver in this project to satisfy this more general requirement. For example, the allocation of data is one of these problems. After the
boundary points and the refined regions are defined (hard-coded in the program or determined by the user), an algorithm is needed to generate the sequence of nodes that lie in the refined regions as the Multigrid solver moves between grids. One possible solution to this problem is a tree-structured data allocation. In 1-D, every node has at most two neighbours, so a binary tree can be used; in 2-D it would be a quad-tree, in 3-D an oct-tree, and so on.
Bibliography
[1] Marsha J. Berger and Joseph Oliger. Adaptive mesh refinement for hyperbolic partial differential equations. Journal of Computational Physics, 53:484–512, 1984.
[2] A. Brandt. Multi-level adaptive technique (MLAT) for fast numerical solution to boundary value problems. Lecture Notes in Physics, 18:82–89, 1973.
[3] A. Brandt. Multi-level adaptive solutions to boundary-value problems. Mathematics of Computation, 31:333–390, 1977.
[4] Richard L. Burden and J. Douglas Faires. Numerical Analysis. Brooks/Cole, 2005.
[5] E.L. Briggs, D.J. Sullivan and J. Bernholc. Real-space multigrid-based approach to large-scale electronic structure calculations. Physical Review B, 45:14362–14375, 1996.
[6] J. Rosam, P.K. Jimack and A. Mullis. A fully implicit, fully adaptive time and space discretisation method for phase-field simulation of binary alloy solidification. Journal of Computational Physics, 225:1271–1287, 2007.
[7] M. Austin and D. Chancogne. Engineering Programming C, MATLAB, JAVA. John Wiley and Sons, 1999.
[8] R.E. Alcouffe, Achi Brandt, J.E. Dendy and J.W. Painter. The multi-grid method for the diffusion equation with strongly discontinuous coefficients. SIAM Journal on Scientific Computing, 2:430–454, 1981.
[9] G.D. Smith. Numerical Solution of Partial Differential Equations: Finite Difference Methods. Oxford University Press, 1985.
[10] R.V. Southwell. Relaxation Methods in Theoretical Physics. Clarendon Press, 1946.
[11] U. Ghia, K.N. Ghia and C.T. Shin. High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Journal of Computational Physics, 48:387–411, 1982.
[12] U. Trottenberg, C. Oosterlee and A. Schuller. Multigrid. Academic Press, 2001.
[13] William L. Briggs, Van Emden Henson and Steve F. McCormick. A Multigrid Tutorial. Society for Industrial and Applied Mathematics, 2000.
[14] X.H. Shi, H.J. Bao and K. Zhou. Out-of-core multigrid solver for streaming meshes. ACM Transactions on Graphics, 28:173–180, 2009.
Appendix A
Personal Reflection
Having finished my project report, I have not only learned from the experience but have also become confident in carrying out a large project in the scientific computing area, which I believe will greatly help my further study in scientific computing.
There are a number of lessons that I have learnt during this project and would like to share. First of all, as an international student whose first language is not English, I had many concerns at the beginning of the project that I might not be able to write a sixty-page report. After discussion with my supervisor, I decided to write down my progress while I did the programming. This decision helped me to keep a record of my achievements and also saved a lot of time when I began to write this report. Additionally, since I had these clear records, I handed in a 24-page mid-project report that described my progress in detail. This mid-project report then became the first draft of the final report, so the structure of the report always remained clear.
The second lesson concerns debugging my programs. Since each program implements a mathematical algorithm, this type of programming has an important characteristic: even when the compiler does not show an error message, the results can be totally wrong, because some of the equations or loops contain numerical or logical errors rather than syntax errors. Debugging then becomes more important and harder. I have learnt from my supervisor that the debugging process should proceed step by step after each calculation. For example, if a Two-grid solver does not output correct results, we can check whether the Gauss-Seidel iteration on the fine grid outputs correctly; if so, we then move down to check the residual calculation, and so on. Such debugging may take some time; however, it is effective at finding numerical errors.
Another lesson I learnt was from my assessor: always check the mark scheme or a list of points while writing. This helps because when you are trying to write a long section or chapter, you may lose your focus during the writing. By checking against the mark scheme or the list of points, you can make sure you have included all the aspects that you wanted to cover.
Finally, one of the most important lessons that I have practised throughout this project is time management. This project required self-study for most of the time, and since there was a long period of time before the deadline, time management was even more important. I have learned that it is better to have more than one task to do in each period of time, so that if one task got stuck before I met my supervisor, I could move on to another task (some of the lessons mentioned before are also part of time management). For example, for most of this project I have had both programming and writing to do.

I have learnt a number of useful lessons during this project. It has also given me an initial view of what is coming when I begin my PhD in September, and gives me confidence for that further study.
Appendix B
Schedule
The schedule of this project is separated into weeks. For the weeks that have passed, achievements are given, and for the coming weeks, targets are marked. (Milestones are denoted as footnotes in the table, and details are given in the following list.)
Weeks Achievement
1 Single Gauss-Seidel iterative method implemented
2 Two-grid version implemented for a specific example1
3 Literature search
4 Multigrid version implemented for a specific example2
5 Multigrid for general 1-D linear BVPs implemented
6 Mid-Project Report3
7-8 Multigrid with FAS algorithm4
Easter Multigrid with MLAT 5
9 Multigrid with FMG & W-cycle
10-12 Final Report6
Milestones:
1. Working code of Two-grid version on model boundary value problem.
2. Working code of Multigrid version on model boundary value problem.
3. Mid-Project report that summarises what has been done so far (including analysis of results from the two versions of working code and a report on the literature search).
4. Working code of Multigrid method with FAS implemented.
5. Working code of Multigrid method with MLAT implemented.
6. Final report that summarises the whole project.
Appendix C
Code Of Linear Gauss-Seidel Solver
/* 1D boundary value problem. u = sin(PIx), f'' = -PI^2 sin(PIx)
 *
 * Gauss-Seidel iterations
 *
 * Tim Yang
 *
 * 25/01/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol    0.000001
#define xleft  0.0
#define xright 1.0

double sin(double x);

int main(int argc, char **argdv)
{
    int N;

    /* Raw input from user */
    printf("Please give the number of intervals: ");
    scanf("%d", &N);

    double dx;
    double *u;
    int i;
    double pre;
    double diff;
    double maxdiff = 2 * Tol;
    double x;
    int count = 0;

    /* Memory allocation */
    u = calloc(N+1, sizeof(double));

    /* Calculating the value of dx, which is the difference between
       two neighbouring nodes. */
    dx = (double)(xright - xleft) / (float)N;
    x = dx;

    u[0] = 0.0;
    u[N] = 0.0;

    for (i = 1; i < N; i++)
    {
        u[i] = 0.5;
    }

    while (maxdiff > Tol)
    {
        x = dx;
        maxdiff = 0.0;

        for (i = 1; i < N; i++)
        {
            pre = u[i];

            /* Gauss-Seidel iteration. It updates one node at once. */
            u[i] = -0.5 * (dx * dx * -(M_PI * M_PI * sin(M_PI * x)) - u[i-1] - u[i+1]);
            diff = pre - u[i];

            x = x + dx;

            /* The stopping criteria: difference between iterations. */
            if (diff < 0)
            {
                diff = -diff;
            }
            if (maxdiff < diff)
            {
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf("loop runs: %d times.\n", count);
    printf("Difference: %1.16f\n", maxdiff);
    printf("array:\n");
    for (i = 0; i <= N; i++)
    {
        printf("%1.16f\n", u[i]);
    }
    /* free memory allocation */
    // free(u);
    return (0);
}
Appendix D
Code Of Linear Two-grid Solver
/* 1D boundary value problem. u = sin(PIx), f'' = -PI^2 sin(PIx)
 *
 * Two-grid functions
 *
 * Tim Yang
 *
 * 25/01/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1   0.000001
#define Tol2   0.000000001
#define xleft  0.0
#define xright 1.0

double sin(double x);

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel(double *x, double dx, int length)
{
    int i;
    for (i = 1; i < length; i++)
    {
        x[i] = -0.5 * (dx * dx * -(M_PI * M_PI * sin(M_PI * dx * i)) - x[i-1] - x[i+1]);
    }
    return *x;
}

/* Calculating the residual for all internal nodes. */
double residual(double *x, double dx, double *y, int length)
{
    int i;
    for (i = 1; i < length; i++)
    {
        x[i] = (-M_PI * M_PI * sin(M_PI * i * dx)) - ((y[i-1] + (-2 * y[i]) + y[i+1]) / (dx * dx));
    }
    return *x;
}

/* Restricting the residual to a coarse grid. */
double restriction(double *x, double *y, int length)
{
    int i;
    int h;
    for (i = 1; i < length; i++)
    {
        h = 2 * i;
        x[i] = (0.25 * y[h-1]) + (0.5 * y[h]) + (0.25 * y[h+1]);
    }
    return *x;
}

/* Error equation is solved only on the coarsest grid. */
double ErrorEquation(double *x, double dx, double *y, int length)
{
    int i;
    double diff;
    double pre;
    double maxdiff = 2 * Tol2;
    for (i = 0; i <= length; i++)
    {
        x[i] = 0.0;
    }
    while (maxdiff > Tol2)
    {
        maxdiff = 0.0;
        for (i = 1; i < length; i++)
        {
            pre = x[i];
            x[i] = -0.5 * (dx * dx * y[i] - x[i-1] - x[i+1]);
            diff = pre - x[i];
            if (diff < 0)
            {
                diff = -diff;
            }
            if (maxdiff < diff)
            {
                maxdiff = diff;
            }
        }
    }
    return *x;
}

/* Interpolating the error back to a fine grid. */
double interpolation(double *x, double *y, int length)
{
    int i;
    int h;
    h = 1;
    for (i = 1; i < length; i += 2)
    {
        x[i] = (y[h-1] + y[h]) * 0.5;
        h = h + 1;
    }
    h = 1;
    for (i = 2; i < length; i += 2)
    {
        x[i] = y[h];
        h = h + 1;
    }
    return *x;
}

/* Update the current value with the interpolated error. */
double correction(double *x, double *y, int length)
{
    int i;
    for (i = 1; i < length; i++)
    {
        x[i] = x[i] + y[i];
    }
    return *x;
}

int main(int argc, char **argdv)
{
    /* number of points on coarse and fine grid. */
    int Nc, Nf;

    /* Raw input from user. */
    // printf("Please give the left and right domain values: ");
    // scanf("%f%f", &xleft, &xright);
    printf("Please give the number of intervals (coarse grid) (fine grid): ");
    scanf("%d%d", &Nc, &Nf);

    double dxf;   /* dxf is dx on fine grid.            */
    double dxc;   /* dxc is dx on coarse grid.          */
    double *ec;   /* errors on coarse grid.             */
    double *ef;   /* errors on fine grid.               */
    double *uf;   /* values of each point on fine grid. */
    double *rc;   /* residual of coarse grid.           */
    double *rf;   /* residual of fine grid.             */
    double *Uold; /* records the previous values of uf. */
    int i, j;
    double diff;
    double maxdiff = 2.0 * Tol1;
    int count = 0;

    /* Memory allocations */
    ec   = calloc(Nc+1, sizeof(double));
    uf   = calloc(Nf+1, sizeof(double));
    rc   = calloc(Nc+1, sizeof(double));
    rf   = calloc(Nf+1, sizeof(double));
    ef   = calloc(Nf+1, sizeof(double));
    Uold = calloc(Nf+1, sizeof(double));

    dxf = (double)(xright - xleft) / (float)Nf;
    dxc = (double)(xright - xleft) / (float)Nc;

    uf[0] = 0.0;
    uf[Nf] = 0.0;
    /* initial guess. */
    for (i = 1; i < Nf; i++)
    {
        uf[i] = 0.5;
    }
    /* Two-grid loop */
    while (maxdiff > Tol1)
    {
        for (i = 0; i <= Nf; i++)
        {
            Uold[i] = uf[i];
        }

        for (j = 0; j < 2; j++)
        {
            GaussSeidel(uf, dxf, Nf);
        }

        residual(rf, dxf, uf, Nf);

        restriction(rc, rf, Nc);

        ErrorEquation(ec, dxc, rc, Nc);

        interpolation(ef, ec, Nf);

        correction(uf, ef, Nf);

        for (j = 0; j < 2; j++)
        {
            GaussSeidel(uf, dxf, Nf);
        }

        /* Stopping criteria. */
        maxdiff = 0.0;
        for (i = 1; i < Nf; i++)
        {
            diff = Uold[i] - uf[i];
            if (diff < 0)
            {
                diff = -diff;
            }
            if (maxdiff < diff)
            {
                maxdiff = diff;
            }
        }

        count = count + 1;
    }
    printf("Values on fine grid.\n");
    for (i = 0; i <= Nf; i++)
    {
        printf("%1.16f\n", uf[i]);
    }
    printf("\nMulti-grid runs: %d times\n", count);
    /* free memory allocation */
    // free(ec);
    // free(uf);
    // free(rc);
    // free(rf);
    // free(ef);
    // free(Uold);
    return (0);
}
Appendix E
Code Of Linear Multigrid Solver
/* 1D boundary value problem.  u = sin(PI*x), u'' = -PI^2 sin(PI*x)
 *
 * Multi-Grid Version
 *
 * Tim Yang
 *
 * 15/02/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1 0.000001
#define Tol2 0.00000001
#define xleft 0.0
#define xright 1.0

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = -0.5 * ( ( dxf * dxf * ff[i] ) - uf[i-1] - uf[i+1] );
  }
  return *uf;
}

/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    rf[i] = ff[i] - ( ( uf[i-1] - 2 * uf[i] + uf[i+1] ) / ( dxf * dxf ) );
  }
  return *rf;
}
/* Restricting the residual to a coarse grid. */
double restriction( double *rc, double *rf, int Nf )
{
  int i;
  int h;
  /* Loop over the Nf/2 coarse-grid points only (the original looped
     to Nf, reading and writing past the ends of rc and rf). */
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
  }
  return *rc;
}
/* The error equation is solved only on the coarsest grid. */
double ErrorEquation( double *ec, double *rc, int Nc )
{
  int i;
  double dxc;
  double diff;
  double pre;
  double maxdiff = 2 * Tol2;

  dxc = ( xright - xleft ) / (double) Nc;

  while ( maxdiff > Tol2 )
  {
    maxdiff = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      pre = ec[i];
      ec[i] = -0.5 * ( dxc * dxc * rc[i] - ec[i-1] - ec[i+1] );
      diff = pre - ec[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
  }
  return *ec;
}
/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
  int i;
  int h;
  h = 1;
  for ( i = 1; i < Nf; i += 2 )
  {
    ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
    h = h + 1;
  }
  h = 1;
  for ( i = 2; i < Nf; i += 2 )
  {
    ef[i] = ec[h];
    h = h + 1;
  }
  return *ef;
}

/* Update the current value with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
  int i;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = uf[i] + ef[i];
  }
  return *uf;
}
/* This Multigrid function runs one V-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc )
{
  int i;
  double *rf; /* residual on fine grid. */
  double *rc; /* residual on coarse grid. */
  double *ec; /* errors on coarse grid. */
  double *ef; /* errors on fine grid. */

  for ( i = 0; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }

  rf = calloc( Nf+1, sizeof( double ) );
  residual( rf, uf, ff, Nf );

  rc = calloc( Nf/2+1, sizeof( double ) );
  restriction( rc, rf, Nf );

  /* calloc zero-initialises ec, so no explicit clearing loop is needed. */
  ec = calloc( Nf/2+1, sizeof( double ) );

  if ( Nf < Nc * 2 + 1 )
  {
    ErrorEquation( ec, rc, Nc );
  }
  else
  {
    /* Recursive approach. */
    MultiGrid( rc, ec, Nf/2, Nc );
  }

  ef = calloc( Nf+1, sizeof( double ) );
  interpolation( ef, ec, Nf );

  correction( uf, ef, Nf );

  for ( i = 0; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }
  /* Free memory allocated in this call (the original left these frees
     commented out, leaking memory on every V-cycle). */
  free( rf );
  free( rc );
  free( ec );
  free( ef );
  return *uf;
}
/* Main. */
int main( int argc, char **argv )
{
  int Nc, N, Nf;
  int i, count = 0;
  double *uf;   /* values of each point on fine grid. */
  double *Uold; /* records the previous values of uf. */
  double *ff;   /* RHS. */
  double dxf, xf;
  double diff;
  double maxdiff = 2.0 * Tol1;

  /* Raw input from user. */
  printf( "Please give the number of intervals on coarse grid: " );
  scanf( "%d", &Nc );
  printf( "Please give the number of grids: " );
  scanf( "%d", &N );

  Nf = Nc;
  for ( i = 1; i < N; i++ )
  {
    Nf = Nf * 2;
  }

  dxf = ( xright - xleft ) / (double) Nf;

  /* These must be allocated before the Multigrid solver runs. */
  uf  = calloc( Nf+1, sizeof( double ) );
  Uold= calloc( Nf+1, sizeof( double ) );
  ff  = calloc( Nf+1, sizeof( double ) );

  ff[0]  = 0.0;
  ff[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    ff[i] = -M_PI * M_PI * sin( M_PI * dxf * i );
  }
  /* Initial guess. */
  uf[0]  = 0.0;
  uf[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = 0.5;
  }

  /* Multigrid loop. */
  while ( maxdiff > Tol1 )
  {
    for ( i = 0; i <= Nf; i++ )
    {
      Uold[i] = uf[i];
    }
    MultiGrid( ff, uf, Nf, Nc );
    /* Stopping criterion. */
    maxdiff = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
      diff = Uold[i] - uf[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
    count = count + 1;
  }

  printf( "Values on fine grid.\n" );
  xf = 0.0;
  for ( i = 0; i <= Nf; i++ )
  {
    printf( "%1.16f %1.16f\n", uf[i], uf[i] - sin( M_PI * xf ) );
    xf = xf + dxf;
  }
  printf( "\nMulti-grid runs: %d times\n", count );
  printf( "%d\n", Nf );
  /* Free memory allocations. */
  free( uf );
  free( Uold );
  free( ff );
  return ( 0 );
}
Appendix F
Code Of Linear Multigrid Solver On A More General Problem
/* 1D boundary value problem.  u = sin(PI*x), with all three coefficients.
 *
 * Multi-Grid Version with coefficients
 *
 * Tim Yang
 *
 * 15/02/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1 0.000001
#define Tol2 0.00000001
#define xleft 0.0
#define xright 1.0
/* Pre-define the three coefficients. */
#define a 1.0
#define b 0.1
#define c 0.1

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = ( ( a - ( b * 0.5 * dxf ) ) * uf[i-1]
              + ( a + ( b * 0.5 * dxf ) ) * uf[i+1]
              - ff[i] * dxf * dxf ) / ( 2 * a - c * dxf * dxf );
  }
  return ( 0 );
}
/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    rf[i] = ff[i] - ( a * ( uf[i-1] / ( dxf * dxf ) )
                      - b * ( uf[i-1] / ( 2 * dxf ) )
                      - a * ( ( 2 * uf[i] ) / ( dxf * dxf ) )
                      + ( c * uf[i] )
                      + a * ( uf[i+1] / ( dxf * dxf ) )
                      + b * ( uf[i+1] / ( 2 * dxf ) ) );
  }
  return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restriction( double *rc, double *rf, int Nf )
{
  int i;
  int h;
  /* Loop over the Nf/2 coarse-grid points only (the original looped
     to Nf, reading and writing past the ends of rc and rf). */
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
  }
  return ( 0 );
}
/* The error equation is solved only on the coarsest grid. */
double ErrorEquation( double *ec, double *rc, int Nc )
{
  int i;
  double dxc;
  double diff;
  double pre;
  double maxdiff = 2 * Tol2;

  dxc = ( xright - xleft ) / (double) Nc;

  while ( maxdiff > Tol2 )
  {
    maxdiff = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      pre = ec[i];
      ec[i] = -0.5 * ( dxc * dxc * rc[i] - ec[i-1] - ec[i+1] );
      diff = pre - ec[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
  }
  return ( 0 );
}
/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
  int i;
  int h;
  h = 1;
  for ( i = 1; i < Nf; i += 2 )
  {
    ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
    h = h + 1;
  }
  h = 1;
  for ( i = 2; i < Nf; i += 2 )
  {
    ef[i] = ec[h];
    h = h + 1;
  }
  return ( 0 );
}

/* Update the current value with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
  int i;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = uf[i] + ef[i];
  }
  return ( 0 );
}
/* This Multigrid function runs one V-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc )
{
  int i;
  double *rf; /* residual on fine grid. */
  double *rc; /* residual on coarse grid. */
  double *ec; /* errors on coarse grid. */
  double *ef; /* errors on fine grid. */

  for ( i = 1; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }

  rf = calloc( Nf+1, sizeof( double ) );
  residual( rf, uf, ff, Nf );

  rc = calloc( Nf/2+1, sizeof( double ) );
  restriction( rc, rf, Nf );

  /* calloc zero-initialises ec (the original clearing loop also wrote
     to ec[Nf/2+1], one element past the end of the array). */
  ec = calloc( Nf/2+1, sizeof( double ) );

  if ( Nf < Nc * 2 + 1 )
  {
    ErrorEquation( ec, rc, Nc );
  }
  else
  {
    /* Recursive approach. */
    MultiGrid( rc, ec, Nf/2, Nc );
  }

  ef = calloc( Nf+1, sizeof( double ) );
  interpolation( ef, ec, Nf );

  correction( uf, ef, Nf );

  for ( i = 1; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }
  /* Free memory allocated in this call (the original left these frees
     commented out, leaking memory on every V-cycle). */
  free( rf );
  free( rc );
  free( ec );
  free( ef );
  return ( 0 );
}
int main( int argc, char **argv )
{
  int Nc, N, Nf;
  int i;
  int count = 0;
  double *uf;   /* values of each point on fine grid. */
  double *Uold; /* records the previous values of uf. */
  double *ff;   /* RHS. */
  double dxf;
  double diff;
  double maxdiff = 2.0 * Tol1;

  /* Raw input from user. */
  printf( "Please give the number of intervals on coarse grid: " );
  scanf( "%d", &Nc );
  printf( "Please give the number of grids: " );
  scanf( "%d", &N );

  Nf = Nc;
  for ( i = 1; i < N; i++ )
  {
    Nf = Nf * 2;
  }

  dxf = ( xright - xleft ) / (double) Nf;

  /* These must be allocated before the Multigrid solver runs. */
  uf  = calloc( Nf+1, sizeof( double ) );
  Uold= calloc( Nf+1, sizeof( double ) );
  ff  = calloc( Nf+1, sizeof( double ) );

  /* RHS. */
  ff[0]  = 0.0;
  ff[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    ff[i] = b * ( M_PI * cos( M_PI * dxf * i ) )
            + c * ( sin( M_PI * dxf * i ) )
            - a * ( M_PI * M_PI * sin( M_PI * dxf * i ) );
  }

  /* Initial guess. */
  uf[0]  = 0.0;
  uf[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = 0.5;
  }

  /* Multigrid loop. */
  while ( maxdiff > Tol1 )
  {
    for ( i = 0; i <= Nf; i++ )
    {
      Uold[i] = uf[i];
    }
    MultiGrid( ff, uf, Nf, Nc );
    /* Stopping criterion. */
    maxdiff = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
      diff = Uold[i] - uf[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
    count = count + 1;
  }

  printf( "%1.16f\n", uf[Nf/2] );
  printf( "%d\n", Nf );
  printf( "Multi-grid runs: %d times\n", count );
  /* Free memory allocations. */
  free( uf );
  free( Uold );
  free( ff );
  return ( 0 );
}
Appendix G
Code Of Non-linear Gauss-Seidel Solver
/* 1D boundary value problem.
 *
 * Non-linear problem: d2u/dx2 + u^2 = f
 *
 * Tim Yang
 *
 * 03/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol 0.000000001
#define xleft 0.0
#define xright 1.0

int main( int argc, char **argv )
{
  double dx;
  double *u;
  double pre;
  int N, i;
  double diff;
  double maxdiff = 2 * Tol;
  int count = 0;

  /* Raw input from user. */
  printf( "Please give the number of intervals: " );
  scanf( "%d", &N );

  /* Memory allocation. */
  u = calloc( N+1, sizeof( double ) );

  dx = ( xright - xleft ) / (double) N;

  u[0] = 0.0;
  u[N] = 0.0;
  for ( i = 1; i < N; i++ )
  {
    u[i] = 0.0;
  }
  while ( maxdiff > Tol )
  {
    maxdiff = 0.0;
    for ( i = 1; i < N; i++ )
    {
      pre = u[i];
      /* Newton-like Gauss-Seidel iteration for one node at a time. */
      u[i] = u[i] - ( ( ( u[i-1] / ( dx * dx ) )
                        + ( u[i] - 2 / ( dx * dx ) ) * u[i]
                        + ( u[i+1] / ( dx * dx ) )
                        - sin( M_PI * dx * i ) * sin( M_PI * dx * i )
                        + M_PI * M_PI * sin( M_PI * dx * i ) )
                      / ( 2 * u[i] - 2 / ( dx * dx ) ) );
      diff = pre - u[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
    count = count + 1;
  }

  /* Print the mid-point value once (the original printed it N+1 times
     inside a loop over all nodes). */
  printf( "%1.16f\n", u[N/2] );
  printf( "runs: %d\n", count );
  free( u );
  return ( 0 );
}
Appendix H
Code Of Non-linear Two-grid Solver
/* 1D boundary value problem.
 *
 * Two-grid, Non-linear
 *
 * Tim Yang
 *
 * 09/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol 0.000001
#define t 13107200   /* number of coarse-grid iterations */
#define xleft 0.0
#define xright 1.0

int main( int argc, char **argv )
{
  /* Number of points on coarse and fine grid. */
  int Nc, Nf;

  /* Raw input from user. */
  printf( "Please give the number of intervals (coarse grid) (fine grid): " );
  scanf( "%d%d", &Nc, &Nf );

  double dxf;   /* dxf is dx on fine grid. */
  double dxc;   /* dxc is dx on coarse grid. */
  double *us;   /* U* array (restricted u). */
  double *uc;   /* value of u on coarse grid. */
  double *fc;   /* modified RHS on coarse grid. */
  double *ff;
  double *ec;   /* errors on coarse grid. */
  double *ef;   /* errors on fine grid. */
  double *uf;   /* values of each point on fine grid. */
  double *rc;   /* residual of coarse grid. */
  double *rf;   /* residual of fine grid. */
  double *Uold; /* records the previous values of uf. */
  int i, j;
  double diff;
  double maxdiff = 2.0 * Tol;
  int count = 0;
  int h;

  /* Memory allocation. */
  ff  = calloc( Nf+1, sizeof( double ) );
  fc  = calloc( Nc+1, sizeof( double ) );
  uc  = calloc( Nc+1, sizeof( double ) );
  us  = calloc( Nc+1, sizeof( double ) );
  ec  = calloc( Nc+1, sizeof( double ) );
  uf  = calloc( Nf+1, sizeof( double ) );
  rc  = calloc( Nc+1, sizeof( double ) );
  rf  = calloc( Nf+1, sizeof( double ) );
  ef  = calloc( Nf+1, sizeof( double ) );
  Uold= calloc( Nf+1, sizeof( double ) );

  dxf = ( xright - xleft ) / (double) Nf;
  dxc = ( xright - xleft ) / (double) Nc;

  uf[0]  = 0.0;
  uf[Nf] = 0.0;
  /* Initial guess. */
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = 0.5;
  }

  ff[0]  = 0.0;
  ff[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    ff[i] = sin( M_PI * dxf * i ) * sin( M_PI * dxf * i )
            - M_PI * M_PI * sin( M_PI * dxf * i );
  }

  /* Two-grid loop. */
  while ( maxdiff > Tol )
  {
    for ( i = 0; i <= Nf; i++ )
    {
      Uold[i] = uf[i];
    }
    /* 2 iterations of Gauss-Seidel on fine grid. */
    for ( j = 0; j < 2; j++ )
    {
      for ( i = 1; i < Nf; i++ )
      {
        uf[i] = uf[i] - ( ( ( uf[i-1] / ( dxf * dxf ) )
                            + ( uf[i] - 2 / ( dxf * dxf ) ) * uf[i]
                            + ( uf[i+1] / ( dxf * dxf ) ) - ff[i] )
                          / ( 2 * uf[i] - 2 / ( dxf * dxf ) ) );
      }
    }
    /* Calculating residual on fine grid. */
    for ( i = 1; i < Nf; i++ )
    {
      rf[i] = ff[i] - ( ( uf[i-1] - 2 * uf[i] + uf[i+1] ) / ( dxf * dxf )
                        + uf[i] * uf[i] );
    }
    /* Restrict residual to coarse grid. */
    for ( i = 1; i < Nc; i++ )
    {
      h = 2 * i;
      rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
    }
    /* Restrict u to coarse grid. */
    for ( i = 1; i < Nc; i++ )
    {
      h = 2 * i;
      us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
    }
    /* Solve full problem on coarse grid. */
    uc[0]  = 0.0;
    uc[Nc] = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      uc[i] = us[i];
    }
    fc[0]  = 0.0;
    fc[Nc] = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      fc[i] = rc[i] + ( us[i-1] - 2 * us[i] + us[i+1] ) / ( dxc * dxc )
              + us[i] * us[i];
    }
    for ( j = 0; j < t; j++ )
    {
      for ( i = 1; i < Nc; i++ )
      {
        uc[i] = uc[i] - ( ( ( uc[i-1] - 2 * uc[i] + uc[i+1] ) / ( dxc * dxc )
                            + uc[i] * uc[i] - fc[i] )
                          / ( 2 * uc[i] - 2 / ( dxc * dxc ) ) );
      }
    }
    /* Calculating the error. */
    for ( i = 0; i <= Nc; i++ )
    {
      ec[i] = uc[i] - us[i];
    }
    /* Interpolating error to fine grid. */
    h = 1;
    for ( i = 1; i < Nf; i += 2 )
    {
      ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
      h = h + 1;
    }
    h = 1;
    for ( i = 2; i < Nf; i += 2 )
    {
      ef[i] = ec[h];
      h = h + 1;
    }
    /* Update values on fine grid. */
    for ( i = 1; i < Nf; i++ )
    {
      uf[i] = uf[i] + ef[i];
    }
    /* 2 iterations of Gauss-Seidel on fine grid. */
    for ( j = 0; j < 2; j++ )
    {
      for ( i = 1; i < Nf; i++ )
      {
        uf[i] = uf[i] - ( ( ( uf[i-1] / ( dxf * dxf ) )
                            + ( uf[i] - 2 / ( dxf * dxf ) ) * uf[i]
                            + ( uf[i+1] / ( dxf * dxf ) ) - ff[i] )
                          / ( 2 * uf[i] - 2 / ( dxf * dxf ) ) );
      }
    }
    /* Stopping criterion. */
    maxdiff = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
      diff = Uold[i] - uf[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
    count = count + 1;
  }

  /* Print the mid-point value once (the original printed it Nf+1 times
     inside a loop over all nodes). */
  printf( "Value at mid-point of fine grid.\n" );
  printf( "%1.16f\n", uf[Nf/2] );
  printf( "Multi-grid runs: %d times\n", count );
  /* Free memory allocations. */
  free( ff );
  free( fc );
  free( uc );
  free( us );
  free( ec );
  free( uf );
  free( rc );
  free( rf );
  free( ef );
  free( Uold );
  return ( 0 );
}
Appendix I
Code Of Non-linear Multigrid Solver
/* 1D boundary value problem.
 *
 * Multi-Grid Version, Non-linear
 *
 * Tim Yang
 *
 * 12/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1 0.000001
#define Tol2 0.00000001
#define xleft 0.0
#define xright 1.0

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = uf[i] - ( ( ( uf[i-1] / ( dxf * dxf ) )
                        + ( uf[i] - 2 / ( dxf * dxf ) ) * uf[i]
                        + ( uf[i+1] / ( dxf * dxf ) ) - ff[i] )
                      / ( 2 * uf[i] - 2 / ( dxf * dxf ) ) );
  }
  return ( 0 );
}
/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    rf[i] = ff[i] - ( ( uf[i-1] - 2 * uf[i] + uf[i+1] ) / ( dxf * dxf )
                      + uf[i] * uf[i] );
  }
  return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restrictionR( double *rc, double *rf, int Nf )
{
  int i;
  int h;
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
  }
  return ( 0 );
}

/* Restricting the value of u to a coarse grid. */
double restrictionU( double *us, double *uf, int Nf )
{
  int i;
  int h;
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
  }
  return ( 0 );
}
/* Calculating the modified RHS on a coarse grid. */
double modifiedRHS( double *us, double *fc, double *rc, int Nf )
{
  int i;
  double dxc;

  dxc = ( xright - xleft ) / (double) ( Nf / 2 );

  fc[0]    = 0.0;
  fc[Nf/2] = 0.0;
  for ( i = 1; i < Nf / 2; i++ )
  {
    fc[i] = rc[i] + ( us[i-1] - 2 * us[i] + us[i+1] ) / ( dxc * dxc )
            + us[i] * us[i];
  }
  return ( 0 );
}
/* Coarse grid solver. */
double ErrorEquation( double *uc, double *fc, int Nc )
{
  int i;
  double dxc;
  double diff;
  double pre;
  double maxdiff = 2 * Tol2;

  dxc = ( xright - xleft ) / (double) Nc;

  while ( maxdiff > Tol2 )
  {
    maxdiff = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      pre = uc[i];
      uc[i] = uc[i] - ( ( ( uc[i-1] - 2 * uc[i] + uc[i+1] ) / ( dxc * dxc )
                          + uc[i] * uc[i] - fc[i] )
                        / ( 2 * uc[i] - 2 / ( dxc * dxc ) ) );
      diff = pre - uc[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
  }
  return ( 0 );
}
/* Calculating the error at each internal node on a coarse grid. */
double error( double *ec, double *uc, double *us, int Nf )
{
  int i;

  for ( i = 1; i < Nf / 2; i++ )
  {
    ec[i] = uc[i] - us[i];
  }
  return ( 0 );
}

/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
  int i;
  int h;
  h = 1;
  for ( i = 1; i < Nf; i += 2 )
  {
    ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
    h = h + 1;
  }
  h = 1;
  for ( i = 2; i < Nf; i += 2 )
  {
    ef[i] = ec[h];
    h = h + 1;
  }
  return ( 0 );
}

/* Updating the current value of u with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
  int i;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = uf[i] + ef[i];
  }
  return ( 0 );
}
/* One V-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc )
{
  int i;
  double *rf; /* residual on fine grid. */
  double *rc; /* residual on coarse grid. */
  double *uc; /* value of u on coarse grid. */
  double *ef; /* errors on fine grid. */
  double *us; /* restricted u from fine grid. */
  double *fc; /* modified RHS on coarse grid. */
  double *ec; /* error term on coarse grid. */

  for ( i = 0; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }

  rf = calloc( Nf+1, sizeof( double ) );
  residual( rf, uf, ff, Nf );

  rc = calloc( Nf/2+1, sizeof( double ) );
  restrictionR( rc, rf, Nf );

  us = calloc( Nf/2+1, sizeof( double ) );
  restrictionU( us, uf, Nf );

  fc = calloc( Nf/2+1, sizeof( double ) );
  modifiedRHS( us, fc, rc, Nf );

  uc = calloc( Nf/2+1, sizeof( double ) );
  uc[0]    = 0.0;
  uc[Nf/2] = 0.0;
  for ( i = 1; i < Nf / 2; i++ )
  {
    uc[i] = us[i];
  }

  if ( Nf < Nc * 2 + 1 )
  {
    ErrorEquation( uc, fc, Nc );
  }
  else
  {
    /* Recursive approach. */
    MultiGrid( fc, uc, Nf/2, Nc );
  }

  ec = calloc( Nf/2+1, sizeof( double ) );
  error( ec, uc, us, Nf );

  ef = calloc( Nf+1, sizeof( double ) );
  interpolation( ef, ec, Nf );

  correction( uf, ef, Nf );

  for ( i = 0; i < 2; i++ )
  {
    GaussSeidel( ff, uf, Nf );
  }
  /* Free memory allocated in this call (the original left these frees
     commented out, leaking memory on every V-cycle). */
  free( us );
  free( fc );
  free( rf );
  free( rc );
  free( ef );
  free( ec );
  free( uc );
  return ( 0 );
}
int main( int argc, char **argv )
{
  int Nc, N, Nf;
  int i, count = 0;
  double *uf;   /* values of each point on fine grid. */
  double *Uold; /* records the previous values of uf. */
  double *ff;
  double dxf;
  double diff;
  double maxdiff = 2.0 * Tol1;

  /* Raw input from user. */
  printf( "Please give the number of intervals on coarse grid: " );
  scanf( "%d", &Nc );
  printf( "Please give the number of grids: " );
  scanf( "%d", &N );

  Nf = Nc;
  for ( i = 1; i < N; i++ )
  {
    Nf = Nf * 2;
  }

  dxf = ( xright - xleft ) / (double) Nf;

  /* Memory allocation. */
  uf  = calloc( Nf+1, sizeof( double ) );
  Uold= calloc( Nf+1, sizeof( double ) );
  ff  = calloc( Nf+1, sizeof( double ) );

  ff[0]  = 0.0;
  ff[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    ff[i] = sin( M_PI * dxf * i ) * sin( M_PI * dxf * i )
            - M_PI * M_PI * sin( M_PI * dxf * i );
  }
  /* Initial guess. */
  uf[0]  = 0.0;
  uf[Nf] = 0.0;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = 0.5;
  }

  /* Multigrid loop. */
  while ( maxdiff > Tol1 )
  {
    for ( i = 0; i <= Nf; i++ )
    {
      Uold[i] = uf[i];
    }
    MultiGrid( ff, uf, Nf, Nc );
    /* Stopping criterion. */
    maxdiff = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
      diff = Uold[i] - uf[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
    count = count + 1;
  }

  /* Print the mid-point value and its error once (the original printed
     them Nf+1 times inside a loop over all nodes). */
  printf( "Value at mid-point of fine grid.\n" );
  printf( "%1.16f %1.16f\n", uf[Nf/2], uf[Nf/2] - sin( M_PI * dxf * ( Nf / 2 ) ) );
  printf( "\nThe finest grid is: %d\n", Nf );
  printf( "Multi-grid runs: %d times\n", count );
  /* Free memory allocations. */
  free( uf );
  free( Uold );
  free( ff );
  return ( 0 );
}
Appendix J
Code Of Non-linear Multigrid Solver On A More General Problem
/* 1D boundary value problem.
 *
 * Multi-Grid Version, Non-linear, power P
 *
 * Tim Yang
 *
 * 21/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1 0.000001
#define Tol2 0.00000001
/* Pre-define the power number. */
#define P 5
#define xleft 0.0
#define xright 1.0

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    /* Newton linearisation: d/du of u^P is P*u^(P-1).  The original
       divided by ( P*uf[i] - 2/(dxf*dxf) ), which is only correct
       for P = 2. */
    uf[i] = uf[i] - ( ( ( uf[i-1] / ( dxf * dxf ) )
                        + ( -2 * uf[i] / ( dxf * dxf ) )
                        + ( uf[i+1] / ( dxf * dxf ) )
                        + pow( uf[i], P ) - ff[i] )
                      / ( P * pow( uf[i], P - 1 ) - 2 / ( dxf * dxf ) ) );
  }
  return ( 0 );
}
/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
  int i;
  double dxf;

  dxf = ( xright - xleft ) / (double) Nf;

  for ( i = 1; i < Nf; i++ )
  {
    rf[i] = ff[i] - ( ( uf[i-1] - 2 * uf[i] + uf[i+1] ) / ( dxf * dxf )
                      + pow( uf[i], P ) );
  }
  return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restrictionR( double *rc, double *rf, int Nf )
{
  int i;
  int h;
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
  }
  return ( 0 );
}

/* Restricting the value of u to a coarse grid. */
double restrictionU( double *us, double *uf, int Nf )
{
  int i;
  int h;
  for ( i = 1; i < Nf / 2; i++ )
  {
    h = 2 * i;
    us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
  }
  return ( 0 );
}
/* Calculating the modified RHS on a coarse grid. */
double modifiedRHS( double *us, double *fc, double *rc, int Nf )
{
  int i;
  double dxc;

  dxc = ( xright - xleft ) / (double) ( Nf / 2 );

  fc[0]    = 0.0;
  fc[Nf/2] = 0.0;
  for ( i = 1; i < Nf / 2; i++ )
  {
    fc[i] = rc[i] + ( us[i-1] - 2 * us[i] + us[i+1] ) / ( dxc * dxc )
            + pow( us[i], P );
  }
  return ( 0 );
}
/* Coarse grid solver. */
double ErrorEquation( double *uc, double *fc, int Nc )
{
  int i;
  double dxc;
  double diff;
  double pre;
  double maxdiff = 2 * Tol2;

  dxc = ( xright - xleft ) / (double) Nc;

  while ( maxdiff > Tol2 )
  {
    maxdiff = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
      pre = uc[i];
      /* Newton linearisation uses P*uc[i]^(P-1); the original used
         P*uc[i], which is only correct for P = 2. */
      uc[i] = uc[i] - ( ( ( uc[i-1] - 2 * uc[i] + uc[i+1] ) / ( dxc * dxc )
                          + pow( uc[i], P ) - fc[i] )
                        / ( P * pow( uc[i], P - 1 ) - 2 / ( dxc * dxc ) ) );
      diff = pre - uc[i];
      if ( diff < 0 )
      {
        diff = -diff;
      }
      if ( maxdiff < diff )
      {
        maxdiff = diff;
      }
    }
  }
  return ( 0 );
}
/* Calculating the error at each internal node on a coarse grid. */
double error( double *ec, double *uc, double *us, int Nf )
{
  int i;

  for ( i = 1; i < Nf / 2; i++ )
  {
    ec[i] = uc[i] - us[i];
  }
  return ( 0 );
}

/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
  int i;
  int h;
  h = 1;
  for ( i = 1; i < Nf; i += 2 )
  {
    ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
    h = h + 1;
  }
  h = 1;
  for ( i = 2; i < Nf; i += 2 )
  {
    ef[i] = ec[h];
    h = h + 1;
  }
  return ( 0 );
}

/* Updating the current value of u with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
  int i;
  for ( i = 1; i < Nf; i++ )
  {
    uf[i] = uf[i] + ef[i];
  }
  return ( 0 );
}
/* One V-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc )
{
    int i;
    double *rf;   /* residual on fine grid. */
    double *rc;   /* residual on coarse grid. */
    double *uc;   /* value of u on coarse grid. */
    double *ef;   /* errors on fine grid. */
    double *us;   /* value of restricted u on coarse grid. */
    double *fc;   /* value of modified RHS on coarse grid. */
    double *ec;   /* error term on coarse grid. */

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }

    rf = calloc( Nf+1, sizeof( double ) );
    residual( rf, uf, ff, Nf );

    rc = calloc( Nf/2+1, sizeof( double ) );
    restrictionR( rc, rf, Nf );

    us = calloc( Nf/2+1, sizeof( double ) );
    restrictionU( us, uf, Nf );

    fc = calloc( Nf/2+1, sizeof( double ) );
    modifiedRHS( us, fc, rc, Nf );

    uc = calloc( Nf/2+1, sizeof( double ) );
    uc[0] = 0.0;
    uc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        uc[i] = us[i];
    }

    if ( Nf < Nc * 2 + 1 )
    {
        ErrorEquation( uc, fc, Nc );
    }
    else
    {
        /* Recursion approach. */
        MultiGrid( fc, uc, Nf/2, Nc );
    }

    ec = calloc( Nf/2+1, sizeof( double ) );
    error( ec, uc, us, Nf );

    ef = calloc( Nf+1, sizeof( double ) );
    interpolation( ef, ec, Nf );

    correction( uf, ef, Nf );

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }
    /* Free memory allocation. */
    free( us );
    free( fc );
    free( rf );
    free( rc );
    free( ef );
    free( ec );
    free( uc );
    return ( 0 );
}
/* **************************************************************************** */
int main( int argc, char **argv )
{
    int Nc, N, Nf;
    int i, count = 0;
    double *uf;     /* values at each point on the fine grid. */
    double *Uold;   /* previous values of uf. */
    double *ff;
    double dxf;
    double diff;
    double maxdiff = 2.0 * Tol1;

    /* Raw input from user. */
    printf( "Please give the number of intervals on coarse grid: " );
    scanf( "%d", &Nc );
    printf( "Please give the number of grids: " );
    scanf( "%d", &N );

    Nf = Nc;
    for ( i = 1; i < N; i++ )
    {
        Nf = Nf * 2;
    }

    dxf = (double)( xright - xleft ) / (double)Nf;
    /* Memory allocation. */
    uf   = calloc( Nf+1, sizeof( double ) );
    Uold = calloc( Nf+1, sizeof( double ) );
    ff   = calloc( Nf+1, sizeof( double ) );

    ff[0] = 0.0;
    ff[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        ff[i] = pow( sin( M_PI*dxf*i ), P ) - M_PI*M_PI*sin( M_PI*dxf*i );
    }
    /* Initial guess. */
    uf[0] = 0.0;
    uf[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = 0.5;
    }
    /* Multigrid loop. */
    while ( maxdiff > Tol1 )
    {
        for ( i = 0; i <= Nf; i++ )
        {
            Uold[i] = uf[i];
        }

        MultiGrid( ff, uf, Nf, Nc );

        /* Stopping criteria. */
        maxdiff = 0.0;
        for ( i = 1; i < Nf; i++ )
        {
            diff = Uold[i] - uf[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf( "\nValues on fine grid.\n" );
    for ( i = 0; i <= Nf; i++ )
    {
        printf( " %1.16f\n", uf[i] );
    }
    printf( "The finest grid is: %d\n", Nf );
    printf( "Multi-grid runs: %d times\n\n", count );
    /* Free memory allocation. */
    free( uf );
    free( Uold );
    free( ff );
    return ( 0 );
}
Appendix K
Code Of FMG
/* 1D boundary value problem.
 *
 * Multi-Grid Version, Non-linear
 *
 * Tim Yang
 *
 * 14/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1   0.000001
#define Tol2   0.00000001
#define xleft  0.0
#define xright 1.0

double sin( double x );

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] - ( ( uf[i-1]/( dxf*dxf ) + ( uf[i] - 2/( dxf*dxf ) )*uf[i] + uf[i+1]/( dxf*dxf ) - ff[i] ) / ( 2*uf[i] - 2/( dxf*dxf ) ) );
    }
    return ( 0 );
}

/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        rf[i] = ff[i] - ( ( uf[i-1] - 2*uf[i] + uf[i+1] ) / ( dxf*dxf ) + uf[i]*uf[i] );
    }
    return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restrictionR( double *rc, double *rf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
    }
    return ( 0 );
}

/* Restricting the value of u to a coarse grid. */
double restrictionU( double *us, double *uf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
    }
    return ( 0 );
}

/* Calculating the modified RHS on a coarse grid. */
double modifiedRHS( double *us, double *fc, double *rc, int Nf )
{
    int i;
    double dxc;

    dxc = (double)( xright - xleft ) / (double)( Nf/2 );

    fc[0] = 0.0;
    fc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        fc[i] = ( rc[i] + ( us[i-1] - 2*us[i] + us[i+1] ) / ( dxc*dxc ) + us[i]*us[i] );
    }
    return ( 0 );
}
/* Coarse grid solver. */
double ErrorEquation( double *uc, double *fc, int Nc )
{
    int i;
    double dxc;
    double diff;
    double pre;
    double maxdiff = 2 * Tol2;

    dxc = (double)( xright - xleft ) / (double)Nc;

    while ( maxdiff > Tol2 )
    {
        maxdiff = 0.0;
        for ( i = 1; i < Nc; i++ )
        {
            pre = uc[i];
            uc[i] = uc[i] - ( ( ( uc[i-1] - 2*uc[i] + uc[i+1] ) / ( dxc*dxc ) + uc[i]*uc[i] - fc[i] ) / ( 2*uc[i] - 2/( dxc*dxc ) ) );
            diff = pre - uc[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
    }
    return ( 0 );
}
/* Calculating the error at each internal node on a coarse grid. */
double error( double *ec, double *uc, double *us, int Nf )
{
    int i;

    for ( i = 1; i < Nf/2; i++ )
    {
        ec[i] = uc[i] - us[i];
    }
    return ( 0 );
}

/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
    int i;
    int h;
    h = 1;
    for ( i = 1; i < Nf; i += 2 )
    {
        ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
        h = h + 1;
    }
    h = 1;
    for ( i = 2; i < Nf; i += 2 )
    {
        ef[i] = ec[h];
        h = h + 1;
    }
    return ( 0 );
}

/* Updating the current value of u with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
    int i;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] + ef[i];
    }
    return ( 0 );
}
/* One V-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc )
{
    int i;
    double *rf;   /* residual on fine grid. */
    double *rc;   /* residual on coarse grid. */
    double *uc;   /* value of u on coarse grid. */
    double *ef;   /* errors on fine grid. */
    double *us;   /* value of restricted u from fine grid. */
    double *fc;   /* value of RHS on coarse grid. */
    double *ec;   /* error term on coarse grid. */

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }

    rf = calloc( Nf+1, sizeof( double ) );
    residual( rf, uf, ff, Nf );

    rc = calloc( Nf/2+1, sizeof( double ) );
    restrictionR( rc, rf, Nf );

    us = calloc( Nf/2+1, sizeof( double ) );
    restrictionU( us, uf, Nf );

    fc = calloc( Nf/2+1, sizeof( double ) );
    modifiedRHS( us, fc, rc, Nf );

    uc = calloc( Nf/2+1, sizeof( double ) );
    uc[0] = 0.0;
    uc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        uc[i] = us[i];
    }

    if ( Nf < Nc * 2 + 1 )
    {
        ErrorEquation( uc, fc, Nc );
    }
    else
    {
        /* Recursion approach. */
        MultiGrid( fc, uc, Nf/2, Nc );
    }

    ec = calloc( Nf/2+1, sizeof( double ) );
    error( ec, uc, us, Nf );

    ef = calloc( Nf+1, sizeof( double ) );
    interpolation( ef, ec, Nf );

    correction( uf, ef, Nf );

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }
    /* Free memory allocation. */
    free( us );
    free( fc );
    free( rf );
    free( rc );
    free( ef );
    free( ec );
    free( uc );
    return ( 0 );
}
/* **************************************************************************** */
int main( int argc, char **argv )
{
    int Nc, N, Nf;
    int i, count = 0;
    int h = 2;
    int j;
    double *uf;     /* values at each point on the fine grid. */
    double *Uold;   /* previous values of uf. */
    double *ff;
    double *uc;
    double dxf, dxc, dx;
    double diff;
    double maxdiff = 2.0 * Tol1;

    /* Raw input from user. */
    printf( "Please give the number of intervals on coarse grid: " );
    scanf( "%d", &Nc );
    printf( "Please give the number of grids: " );
    scanf( "%d", &N );

    Nf = Nc;
    for ( i = 1; i < N; i++ )
    {
        Nf = Nf * 2;
    }

    dxf = (double)( xright - xleft ) / (double)Nf;
    dxc = (double)( xright - xleft ) / (double)Nc;

    uc   = calloc( Nf+1, sizeof( double ) );
    uf   = calloc( Nf+1, sizeof( double ) );
    Uold = calloc( Nf+1, sizeof( double ) );
    ff   = calloc( Nf+1, sizeof( double ) );

    ff[0] = 0.0;
    ff[Nc] = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
        ff[i] = sin( M_PI*dxc*i )*sin( M_PI*dxc*i ) - M_PI*M_PI*sin( M_PI*dxc*i );
    }
    uc[0] = 0.0;
    uc[Nf] = 0.0;
    for ( i = 1; i < Nc; i++ )
    {
        uc[i] = 0.0;
    }
    ErrorEquation( uc, ff, Nc );
    interpolation( uf, uc, Nc*2 );

    /* FMG on the grids between the coarsest and finest grids. dx is re-calculated
       each time since the number of points on the grid is changing. */
    for ( j = 2; j < N; j++ )
    {
        dx = (double)( xright - xleft ) / (double)( Nc*h );
        /* Nc*h first indicates the grid just after the coarsest grid, and is
           multiplied by two each time. */
        for ( i = 1; i < Nc*h; i++ )
        {
            ff[i] = sin( M_PI*dx*i )*sin( M_PI*dx*i ) - M_PI*M_PI*sin( M_PI*dx*i );
        }

        MultiGrid( ff, uf, Nc*h, Nc );

        /* As Nc*h indicates the number of points on the current grid, Nc*h*2
           determines the number of points on the next grid. */
        interpolation( uc, uf, Nc*h*2 );
        for ( i = 1; i < Nc*h*2; i++ )
        {
            uf[i] = uc[i];
        }
        h = h * 2;
    }
    /* The RHS on the finest grid is calculated here. */
    for ( i = 1; i < Nf; i++ )
    {
        ff[i] = sin( M_PI*dxf*i )*sin( M_PI*dxf*i ) - M_PI*M_PI*sin( M_PI*dxf*i );
    }
    /* Multigrid loop. */
    while ( maxdiff > Tol1 )
    {
        for ( i = 0; i <= Nf; i++ )
        {
            Uold[i] = uf[i];
        }

        MultiGrid( ff, uf, Nf, Nc );

        /* Stopping criteria. */
        maxdiff = 0.0;
        for ( i = 1; i < Nf; i++ )
        {
            diff = Uold[i] - uf[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf( "Values on fine grid.\n" );
    for ( i = 0; i <= Nf; i++ )
    {
        printf( " %1.16f %1.16f\n", uf[Nf/2], uf[Nf/2] - sin( M_PI*dxf*( Nf/2 ) ) );
    }
    printf( "\nThe finest grid is: %d\n", Nf );
    printf( "Multi-grid runs: %d times\n", count );
    /* Free memory allocation. */
    free( uf );
    free( uc );
    free( Uold );
    free( ff );
    return ( 0 );
}
Appendix L
Code Of W-cycle
/* 1D boundary value problem.
 *
 * Multi-Grid Version, Non-linear, W-cycle
 *
 * Tim Yang
 *
 * 17/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1   0.000001
#define Tol2   0.00000001
#define xleft  0.0
#define xright 1.0

double sin( double x );

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] - ( ( uf[i-1]/( dxf*dxf ) + ( uf[i] - 2/( dxf*dxf ) )*uf[i] + uf[i+1]/( dxf*dxf ) - ff[i] ) / ( 2*uf[i] - 2/( dxf*dxf ) ) );
    }
    return ( 0 );
}

/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        rf[i] = ff[i] - ( ( uf[i-1] - 2*uf[i] + uf[i+1] ) / ( dxf*dxf ) + uf[i]*uf[i] );
    }
    return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restrictionR( double *rc, double *rf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
    }
    return ( 0 );
}

/* Restricting the value of u to a coarse grid. */
double restrictionU( double *us, double *uf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
    }
    return ( 0 );
}

/* Calculating the modified RHS on a coarse grid. */
double modifiedRHS( double *us, double *fc, double *rc, int Nf )
{
    int i;
    double dxc;

    dxc = (double)( xright - xleft ) / (double)( Nf/2 );

    fc[0] = 0.0;
    fc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        fc[i] = rc[i] + ( us[i-1] - 2*us[i] + us[i+1] ) / ( dxc*dxc ) + us[i]*us[i];
    }
    return ( 0 );
}
/* Coarse grid solver. */
double ErrorEquation( double *uc, double *fc, int Nc )
{
    int i;
    /* int j; */
    double dxc;
    double diff;
    double pre;
    double maxdiff = 2 * Tol2;

    dxc = (double)( xright - xleft ) / (double)Nc;

    while ( maxdiff > Tol2 )
    /* for ( j = 0; j < 3276800; j++ ) */
    {
        maxdiff = 0.0;
        for ( i = 1; i < Nc; i++ )
        {
            pre = uc[i];
            uc[i] = uc[i] - ( ( ( uc[i-1] - 2*uc[i] + uc[i+1] ) / ( dxc*dxc ) + uc[i]*uc[i] - fc[i] ) / ( 2*uc[i] - 2/( dxc*dxc ) ) );
            diff = pre - uc[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
    }
    return ( 0 );
}
/* Calculating the error at each internal node on a coarse grid. */
double error( double *ec, double *uc, double *us, int Nf )
{
    int i;

    for ( i = 0; i <= Nf/2; i++ )
    {
        ec[i] = uc[i] - us[i];
    }
    return ( 0 );
}

/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
    int i;
    int h;
    h = 1;
    for ( i = 1; i < Nf; i += 2 )
    {
        ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
        h = h + 1;
    }
    h = 1;
    for ( i = 2; i < Nf; i += 2 )
    {
        ef[i] = ec[h];
        h = h + 1;
    }
    return ( 0 );
}

/* Updating the current value of u with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
    int i;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] + ef[i];
    }
    return ( 0 );
}
/* One W-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc, int NNf )
{
    int i;
    int w = 0;
    double *rf;   /* residual on fine grid. */
    double *rc;   /* residual on coarse grid. */
    double *uc;
    double *ef;   /* errors on fine grid. */
    double *us;
    double *fc;
    double *ec;

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }

    rf = calloc( Nf+1, sizeof( double ) );
    residual( rf, uf, ff, Nf );

    rc = calloc( Nf/2+1, sizeof( double ) );
    restrictionR( rc, rf, Nf );

    us = calloc( Nf/2+1, sizeof( double ) );
    restrictionU( us, uf, Nf );

    uc = calloc( Nf/2+1, sizeof( double ) );
    uc[0] = 0.0;
    uc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        uc[i] = us[i];
    }

    fc = calloc( Nf/2+1, sizeof( double ) );
    modifiedRHS( us, fc, rc, Nf );

    /* Allocated once and reused on each pass through the loop below. */
    ec = calloc( Nf/2+1, sizeof( double ) );
    ef = calloc( Nf+1, sizeof( double ) );

    /* The number of additional coarse grid corrections is determined by this
       w variable. */
    while ( w < 1 )
    {
        if ( Nf < Nc * 2 + 1 )
        {
            ErrorEquation( uc, fc, Nc );
            w = w + 1;
        }
        else
        {
            MultiGrid( fc, uc, Nf/2, Nc, NNf );
            if ( Nf == NNf )
            {
                w = 100;
            }
            else
            {
                w = w + 1;
            }
        }

        error( ec, uc, us, Nf );

        interpolation( ef, ec, Nf );

        correction( uf, ef, Nf );

        for ( i = 0; i < 2; i++ )
        {
            GaussSeidel( ff, uf, Nf );
        }

        residual( rf, uf, ff, Nf );
        restrictionR( rc, rf, Nf );
        restrictionU( us, uf, Nf );
        uc[0] = 0.0;
        uc[Nf/2] = 0.0;
        for ( i = 1; i < Nf/2; i++ )
        {
            uc[i] = us[i];
        }
        modifiedRHS( us, fc, rc, Nf );
    }
    /* Free memory allocation. */
    free( us );
    free( fc );
    free( rf );
    free( rc );
    free( ef );
    free( ec );
    free( uc );
    return ( 0 );
}
int main( int argc, char **argv )
{
    int Nc, N, Nf;
    int i, count = 0;
    double *uf;     /* values at each point on the fine grid. */
    double *Uold;   /* previous values of uf. */
    double *ff;
    double dxf;
    double diff;
    double maxdiff = 2.0 * Tol1;

    /* Raw input from user. */
    printf( "Please give the number of intervals on coarse grid: " );
    scanf( "%d", &Nc );
    printf( "Please give the number of grids: " );
    scanf( "%d", &N );

    Nf = Nc;
    for ( i = 1; i < N; i++ )
    {
        Nf = Nf * 2;
    }

    dxf = (double)( xright - xleft ) / (double)Nf;

    uf   = calloc( Nf+1, sizeof( double ) );
    Uold = calloc( Nf+1, sizeof( double ) );
    ff   = calloc( Nf+1, sizeof( double ) );

    ff[0] = 0.0;
    ff[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        ff[i] = sin( M_PI*dxf*i )*sin( M_PI*dxf*i ) - M_PI*M_PI*sin( M_PI*dxf*i );
    }
    /* Initial guess. */
    uf[0] = 0.0;
    uf[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = 0.5;
    }
    /* Multigrid loop. */
    while ( maxdiff > Tol1 )
    {
        for ( i = 0; i <= Nf; i++ )
        {
            Uold[i] = uf[i];
        }

        MultiGrid( ff, uf, Nf, Nc, Nf );

        /* Stopping criteria. */
        maxdiff = 0.0;
        for ( i = 1; i < Nf; i++ )
        {
            diff = Uold[i] - uf[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf( "Values on fine grid.\n" );
    for ( i = 0; i <= Nf; i++ )
    {
        printf( " %1.16f %1.16f\n", uf[Nf/2], uf[Nf/2] - sin( M_PI*dxf*( Nf/2 ) ) );
    }
    printf( "\nThe finest grid is: %d\n", Nf );
    printf( "Multi-grid runs: %d times\n", count );
    /* Free memory allocation. */
    free( uf );
    free( Uold );
    free( ff );
    return ( 0 );
}
Appendix M
Code Of W-cycle On A More GeneralProblem
/* 1D boundary value problem.
 *
 * Multi-Grid Version, Non-linear, power P
 *
 * Tim Yang
 *
 * 21/03/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define Tol1   0.000001
#define Tol2   0.00000001
#define P      5
#define xleft  0.0
#define xright 1.0

double pow( double base, double exp );

double sin( double x );

/* Single Gauss-Seidel iteration for all internal nodes. */
double GaussSeidel( double *ff, double *uf, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] - ( ( uf[i-1]/( dxf*dxf ) - 2*uf[i]/( dxf*dxf ) + uf[i+1]/( dxf*dxf ) + pow( uf[i], P ) - ff[i] ) / ( P*uf[i] - 2/( dxf*dxf ) ) );
    }
    return ( 0 );
}

/* Calculating the residual for all internal nodes. */
double residual( double *rf, double *uf, double *ff, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        rf[i] = ff[i] - ( ( uf[i-1] - 2*uf[i] + uf[i+1] ) / ( dxf*dxf ) + pow( uf[i], P ) );
    }
    return ( 0 );
}
/* Restricting the residual to a coarse grid. */
double restrictionR( double *rc, double *rf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
    }
    return ( 0 );
}

/* Restricting the value of u to a coarse grid. */
double restrictionU( double *us, double *uf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
    }
    return ( 0 );
}

/* Calculating the modified RHS on a coarse grid. */
double modifiedRHS( double *us, double *fc, double *rc, int Nf )
{
    int i;
    double dxc;

    dxc = (double)( xright - xleft ) / (double)( Nf/2 );

    fc[0] = 0.0;
    fc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        fc[i] = ( rc[i] + ( us[i-1] - 2*us[i] + us[i+1] ) / ( dxc*dxc ) + pow( us[i], P ) );
    }
    return ( 0 );
}
/* Coarse grid solver. */
double ErrorEquation( double *uc, double *fc, int Nc )
{
    int i;
    /* int j; */
    double dxc;
    double diff;
    double pre;
    double maxdiff = 2 * Tol2;

    dxc = (double)( xright - xleft ) / (double)Nc;

    while ( maxdiff > Tol2 )
    /* for ( j = 0; j < 13107200; j++ ) */
    {
        maxdiff = 0.0;
        for ( i = 1; i < Nc; i++ )
        {
            pre = uc[i];
            uc[i] = uc[i] - ( ( ( uc[i-1] - 2*uc[i] + uc[i+1] ) / ( dxc*dxc ) + pow( uc[i], P ) - fc[i] ) / ( P*uc[i] - 2/( dxc*dxc ) ) );
            diff = pre - uc[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
    }
    return ( 0 );
}
/* Calculating the error at each internal node on a coarse grid. */
double error( double *ec, double *uc, double *us, int Nf )
{
    int i;

    for ( i = 1; i < Nf/2; i++ )
    {
        ec[i] = uc[i] - us[i];
    }
    return ( 0 );
}

/* Interpolating the error back to a fine grid. */
double interpolation( double *ef, double *ec, int Nf )
{
    int i;
    int h;
    h = 1;
    for ( i = 1; i < Nf; i += 2 )
    {
        ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
        h = h + 1;
    }
    h = 1;
    for ( i = 2; i < Nf; i += 2 )
    {
        ef[i] = ec[h];
        h = h + 1;
    }
    return ( 0 );
}

/* Updating the current value of u with the interpolated error. */
double correction( double *uf, double *ef, int Nf )
{
    int i;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = uf[i] + ef[i];
    }
    return ( 0 );
}
/* One W-cycle. */
double MultiGrid( double *ff, double *uf, int Nf, int Nc, int NNf )
{
    int i;
    int w = 0;
    double *rf;   /* residual on fine grid. */
    double *rc;   /* residual on coarse grid. */
    double *uc;
    double *ef;   /* errors on fine grid. */
    double *us;
    double *fc;
    double *ec;

    for ( i = 0; i < 2; i++ )
    {
        GaussSeidel( ff, uf, Nf );
    }

    rf = calloc( Nf+1, sizeof( double ) );
    residual( rf, uf, ff, Nf );

    rc = calloc( Nf/2+1, sizeof( double ) );
    restrictionR( rc, rf, Nf );

    us = calloc( Nf/2+1, sizeof( double ) );
    restrictionU( us, uf, Nf );

    uc = calloc( Nf/2+1, sizeof( double ) );
    uc[0] = 0.0;
    uc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ )
    {
        uc[i] = us[i];
    }

    fc = calloc( Nf/2+1, sizeof( double ) );
    modifiedRHS( us, fc, rc, Nf );

    /* Allocated once and reused on each pass through the loop below. */
    ec = calloc( Nf/2+1, sizeof( double ) );
    ef = calloc( Nf+1, sizeof( double ) );

    /* The number of additional coarse grid corrections is determined by this
       w variable. */
    while ( w < 2 )
    {
        if ( Nf < Nc * 2 + 1 )
        {
            ErrorEquation( uc, fc, Nc );
            w = w + 1;
        }
        else
        {
            MultiGrid( fc, uc, Nf/2, Nc, NNf );
            if ( Nf == NNf )
            {
                w = 100;
            }
            else
            {
                w = w + 1;
            }
        }

        error( ec, uc, us, Nf );

        interpolation( ef, ec, Nf );

        correction( uf, ef, Nf );

        for ( i = 0; i < 2; i++ )
        {
            GaussSeidel( ff, uf, Nf );
        }

        residual( rf, uf, ff, Nf );
        restrictionR( rc, rf, Nf );
        restrictionU( us, uf, Nf );
        uc[0] = 0.0;
        uc[Nf/2] = 0.0;
        for ( i = 1; i < Nf/2; i++ )
        {
            uc[i] = us[i];
        }
        modifiedRHS( us, fc, rc, Nf );
    }
    /* Free memory allocation. */
    free( us );
    free( fc );
    free( rf );
    free( rc );
    free( ef );
    free( ec );
    free( uc );
    return ( 0 );
}
/* **************************************************************************** */
int main( int argc, char **argv )
{
    int Nc, N, Nf;
    int i, count = 0;
    double *uf;     /* values at each point on the fine grid. */
    double *Uold;   /* previous values of uf. */
    double *ff;
    double dxf;
    double diff;
    double maxdiff = 2.0 * Tol1;

    /* Raw input from user. */
    printf( "Please give the number of intervals on coarse grid: " );
    scanf( "%d", &Nc );
    printf( "Please give the number of grids: " );
    scanf( "%d", &N );

    Nf = Nc;
    for ( i = 1; i < N; i++ )
    {
        Nf = Nf * 2;
    }

    dxf = (double)( xright - xleft ) / (double)Nf;

    uf   = calloc( Nf+1, sizeof( double ) );
    Uold = calloc( Nf+1, sizeof( double ) );
    ff   = calloc( Nf+1, sizeof( double ) );

    ff[0] = 0.0;
    ff[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        ff[i] = pow( sin( M_PI*dxf*i ), P ) - M_PI*M_PI*sin( M_PI*dxf*i );
    }
    /* Initial guess. */
    uf[0] = 0.0;
    uf[Nf] = 0.0;
    for ( i = 1; i < Nf; i++ )
    {
        uf[i] = 0.5;
    }
    /* Multigrid loop. */
    while ( maxdiff > Tol1 )
    {
        for ( i = 0; i <= Nf; i++ )
        {
            Uold[i] = uf[i];
        }

        MultiGrid( ff, uf, Nf, Nc, Nf );

        /* Stopping criteria. */
        maxdiff = 0.0;
        for ( i = 1; i < Nf; i++ )
        {
            diff = Uold[i] - uf[i];
            if ( diff < 0 )
            {
                diff = -diff;
            }
            if ( maxdiff < diff )
            {
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf( "\nValues on fine grid.\n" );
    for ( i = 0; i <= Nf; i++ )
    {
        printf( " %1.16f %1.16f\n", uf[Nf/2], uf[Nf/2] - sin( M_PI*dxf*( Nf/2 ) ) );
    }
    printf( "The finest grid is: %d\n", Nf );
    printf( "Multi-grid runs: %d times\n\n", count );
    /* Free memory allocation. */
    free( uf );
    free( Uold );
    free( ff );
    return ( 0 );
}
Appendix N
Code Of Adaptive Multigrid Solver
/* 1D boundary value problem.
 *
 * MLAT Multi-Grid Version
 *
 * Tim Yang
 *
 * 08/04/2010
 */

#include <stdlib.h>
#include <stdio.h>
#include <math.h>

/* MyLib.h has the functions of the non-linear multigrid; they are used when we
   apply FMG before the MLAT runs. */
#include "MyLib.h"

#define Tol1   0.000001
#define Tol2   0.00000001
#define xleft  0.0
#define xright 1.0

double pow( double base, double exp );

/* One iteration of Gauss-Seidel. left[M] and right[M] indicate the boundary
   points, therefore the internal points are between left[M]+1 and right[M]-1.
   M indicates the current grid level. */
double M_GaussSeidel( double *ff, double *uf, int Nf, int *left, int *right, int M )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = left[M] + 1; i < right[M]; i++ )
    {
        uf[i] = uf[i] - ( ( uf[i-1]/( dxf*dxf ) + ( uf[i] - 2/( dxf*dxf ) )*uf[i] + uf[i+1]/( dxf*dxf ) - ff[i] ) / ( 2*uf[i] - 2/( dxf*dxf ) ) );
    }
    return ( 0 );
}
/* Calculate the residual at all points on the fine grid. */
double M_residual( double *rf, double *uf, double *ff, int Nf )
{
    int i;
    double dxf;

    dxf = (double)( xright - xleft ) / (double)Nf;

    for ( i = 1; i < Nf; i++ )
    {
        rf[i] = ff[i] - ( ( uf[i-1] - 2*uf[i] + uf[i+1] ) / ( dxf*dxf ) + uf[i]*uf[i] );
    }
    return ( 0 );
}

/* Restrict the residual of the fine grid to the coarse grid, for all points. */
double M_restrictionR( double *rc, double *rf, int Nf )
{
    int i;
    int h;
    for ( i = 1; i < Nf/2; i++ )
    {
        h = 2 * i;
        rc[i] = ( 0.25 * rf[h-1] ) + ( 0.5 * rf[h] ) + ( 0.25 * rf[h+1] );
    }
    return ( 0 );
}

/* Restrict uf to u(bar); only apply the full weighting on the internal points,
   for the others simply take the corresponding values. */
double M_restrictionU( double *us, double *uf, int Nf, int *left, int *right,
                       int *interleft, int *interright, int M )
{
    int i;
    int h;
    for ( i = interleft[M-1]+1; i < interright[M-1]; i++ )
    {
        h = 2 * i;
        us[i] = ( 0.25 * uf[h-1] ) + ( 0.5 * uf[h] ) + ( 0.25 * uf[h+1] );
    }
    for ( i = 1; i <= interleft[M-1]; i++ )
    {
        us[i] = uf[i*2];
    }
    for ( i = interright[M-1]; i < Nf/2; i++ )
    {
        us[i] = uf[i*2];
    }
    return ( 0 );
}
/* Calculate the modified right-hand side, only for the internal points; the
   others take the original right-hand side. */
double M_modifiedRHS( double *us, double *fc, double *rc, int Nf, int *left,
                      int *right, int *interleft, int *interright, int M )
{
    int i;
    double dxc;

    dxc = (double)( xright - xleft ) / (double)( Nf/2 );
    fc[0] = 0.0;
    fc[Nf/2] = 0.0;
    for ( i = interleft[M-1]+1; i < interright[M-1]; i++ ){
        fc[i] = ( rc[i] + ( us[i-1] - 2*us[i] + us[i+1] )/(dxc*dxc) + us[i]*us[i] );
    }
    for ( i = 1; i <= interleft[M-1]; i++ ){
        fc[i] = sin( M_PI*dxc*i )*sin( M_PI*dxc*i ) - M_PI*M_PI*sin( M_PI*dxc*i );
    }
    for ( i = interright[M-1]; i < Nf/2; i++ ){
        fc[i] = sin( M_PI*dxc*i )*sin( M_PI*dxc*i ) - M_PI*M_PI*sin( M_PI*dxc*i );
    }
    return ( 0 );
}
/* Solve the problem on the coarsest grid. For all points. */
double M_ErrorEquation( double *uc, double *fc, int Nc )
{
    int i;
    double dxc;
    double diff;
    double pre;
    double maxdiff = 2 * Tol2;

    dxc = (double)( xright - xleft ) / (double)Nc;
    while ( maxdiff > Tol2 ){
        maxdiff = 0.0;
        for ( i = 1; i < Nc; i++ ){
            pre = uc[i];
            uc[i] = uc[i] - ( ( ( uc[i-1] - 2*uc[i] + uc[i+1] )/(dxc*dxc)
                                + uc[i]*uc[i] - fc[i] )
                              / ( 2*uc[i] - 2/(dxc*dxc) ) );
            diff = pre - uc[i];
            if ( diff < 0 ){
                diff = -diff;
            }
            if ( maxdiff < diff ){
                maxdiff = diff;
            }
        }
    }
    return ( 0 );
}
/* Calculate the errors for both boundary and internal points. */
double M_error( double *ec, double *uc, double *us, int Nf, int *left, int *right,
                int M )
{
    int i;

    for ( i = left[M-1]; i <= right[M-1]; i++ ){
        ec[i] = uc[i] - us[i];
    }
    return ( 0 );
}
/* Interpolate the error to the finer grid. For all points. */
double M_interpolation( double *ef, double *ec, int Nf )
{
    int i;
    int h;

    h = 1;
    for ( i = 1; i < Nf; i += 2 ){
        ef[i] = ( ec[h-1] + ec[h] ) * 0.5;
        h = h + 1;
    }
    h = 1;
    for ( i = 2; i < Nf; i += 2 ){
        ef[i] = ec[h];
        h = h + 1;
    }
    return ( 0 );
}
/* Update the values for both boundary and internal points. */
double M_correction( double *uf, double *ef, int Nf, int *left, int *right, int M )
{
    int i;

    for ( i = left[M]; i <= right[M]; i++ ){
        uf[i] = uf[i] + ef[i];
    }
    return ( 0 );
}
/* MLAT Multigrid. */
double M_MultiGrid( double *ff, double *uf, int Nf, int Nc, int *left, int *right,
                    int *interleft, int *interright, int M )
{
    int i;
    double *rf;   /* residual of fine grid */
    double *rc;   /* residual of coarse grid */
    double *uc;
    double *ef;   /* errors on fine grid */
    double *us;
    double *fc;
    double *ec;

    for ( i = 0; i < 2; i++ ){
        M_GaussSeidel( ff, uf, Nf, left, right, M );
    }

    rf = calloc( Nf+1, sizeof(double) );
    M_residual( rf, uf, ff, Nf );

    rc = calloc( Nf/2+1, sizeof(double) );
    M_restrictionR( rc, rf, Nf );

    us = calloc( Nf/2+1, sizeof(double) );
    M_restrictionU( us, uf, Nf, left, right, interleft, interright, M );

    fc = calloc( Nf/2+1, sizeof(double) );
    M_modifiedRHS( us, fc, rc, Nf, left, right, interleft, interright, M );

    uc = calloc( Nf/2+1, sizeof(double) );
    uc[0] = 0.0;
    uc[Nf/2] = 0.0;
    for ( i = 1; i < Nf/2; i++ ){
        uc[i] = us[i];
    }

    if ( Nf < Nc * 2 + 1 ){
        M_ErrorEquation( uc, fc, Nc );
    }
    else{
        M_MultiGrid( fc, uc, Nf/2, Nc, left, right, interleft, interright, M-1 );
    }

    ec = calloc( Nf/2+1, sizeof(double) );
    M_error( ec, uc, us, Nf, left, right, M );

    ef = calloc( Nf+1, sizeof(double) );
    M_interpolation( ef, ec, Nf );

    M_correction( uf, ef, Nf, left, right, M );

    for ( i = 0; i < 2; i++ ){
        M_GaussSeidel( ff, uf, Nf, left, right, M );
    }

    /* Free memory allocation. These arrays are local to this call, so they
       can safely be released here; leaving them allocated would leak memory
       on every cycle. */
    free( us );
    free( fc );
    free( rf );
    free( rc );
    free( ef );
    free( ec );
    free( uc );
    return ( 0 );
}
int main( int argc, char **argv )
{
    int i, j, M;
    int h = 2;
    int Nc, Nf, N;
    double *uf;
    double *Uold;
    double *ff;
    double *uc;
    int *left, *right;
    double dxf, dxc, dx;
    int *interleft, *interright;
    double diff;
    double maxdiff = Tol1 * 2;
    int count = 0;

    /* Raw input from user. */
    printf( "Please give the number of intervals on coarse grid: " );
    scanf( "%d", &Nc );
    printf( "Please give the number of grids: " );
    scanf( "%d", &N );

    /* Calculate the number of points on the finest grid. */
    Nf = Nc;
    for ( i = 1; i < N; i++ ){
        Nf = Nf * 2;
    }

    /* Determine M. Since the left and right arrays start from zero, M, as an
       int, is the level number minus one. */
    M = N - 1;

    /* Determine dxf and dxc: dxf is the spacing between two points on the
       finest grid, dxc is the spacing on the coarsest grid. */
    dxf = (double)( xright - xleft ) / (double)Nf;
    dxc = (double)( xright - xleft ) / (double)Nc;

    /* Allocate the memory. The normal non-linear multigrid in MyLib.h uses
       the same names for these arrays, but the ones below are local to main,
       so they are distinct objects and can be freed safely at the end. */
    uc = calloc( Nf+1, sizeof(double) );
    uf = calloc( Nf+1, sizeof(double) );
    Uold = calloc( Nf+1, sizeof(double) );
    ff = calloc( Nf+1, sizeof(double) );
    left = calloc( N, sizeof(int) );
    right = calloc( N, sizeof(int) );
    interleft = calloc( N, sizeof(int) );
    interright = calloc( N, sizeof(int) );

    /* FMG on the coarsest grid. */
    ff[0] = 0.0;
    ff[Nc] = 0.0;
    for ( i = 1; i < Nc; i++ ){
        ff[i] = sin( M_PI*dxc*i )*sin( M_PI*dxc*i ) - M_PI*M_PI*sin( M_PI*dxc*i );
    }
    uc[0] = 0.0;
    uc[Nf] = 0.0;
    for ( i = 1; i < Nc; i++ ){
        uc[i] = 0.0;
    }
    ErrorEquation( uc, ff, Nc );
    interpolation( uf, uc, Nc*2 );

    /* FMG on the grids between the coarsest and finest grids. dx is
       re-calculated each time since the number of points on the grid is
       changing. */
    for ( j = 2; j < N; j++ ){
        dx = (double)( xright - xleft ) / (double)( Nc*h );
        /* Nc*h first indicates the grid just after the coarsest grid, and is
           multiplied by two each time. */
        for ( i = 1; i < Nc*h; i++ ){
            ff[i] = sin( M_PI*dx*i )*sin( M_PI*dx*i ) - M_PI*M_PI*sin( M_PI*dx*i );
        }
        MultiGrid( ff, uf, Nc*h, Nc );
        /* As Nc*h indicates the number of points on the current grid, Nc*h*2
           determines the number of points on the next grid. */
        interpolation( uc, uf, Nc*h*2 );
        for ( i = 1; i < Nc*h*2; i++ ){
            uf[i] = uc[i];
        }
        h = h * 2;
    }

    /* The RHS of the finest grid is calculated here. */
    for ( i = 1; i < Nf; i++ ){
        ff[i] = sin( M_PI*dxf*i )*sin( M_PI*dxf*i ) - M_PI*M_PI*sin( M_PI*dxf*i );
    }

    /* Determine the left and right boundary points on each grid. As in the
       example used before, the coarsest boundary points are 1 and 3, so for
       the finer grids the indexes of the boundary points increase by a factor
       of two. */
    left[0] = 0;
    right[0] = Nc;
    h = 2;
    for ( i = 1; i < M; i++ ){
        left[i] = 0;
        right[i] = Nc * h;
        h = h * 2;
    }
    left[M] = ( Nc * pow( 2, M ) / 4 );
    right[M] = 3 * left[M];

    interleft[0] = 0;
    interright[0] = Nc;
    for ( i = 1; i <= M; i++ ){
        interleft[i] = interleft[i-1] * 2;
        interright[i] = interright[i-1] * 2;
    }

    /* While loop starts here. M_MultiGrid has a number of inputs:
     * ff is the RHS that we calculated before (right after the FMG).
     * uf holds the values on the finest grid, obtained from interpolation
     *   after FMG on a coarser grid.
     * Nf is the number of points on the current grid, and is halved inside
     *   M_MultiGrid each time it recurses.
     * left holds the left boundary points on these fine grids.
     * right holds the right boundary points on these fine grids.
     * M is the current level minus one, since both the left and right arrays
     *   start from zero; it indicates which entries of left and right are
     *   used. */
    while ( maxdiff > Tol1 ){
        for ( i = 0; i <= Nf; i++ ){
            Uold[i] = uf[i];
        }
        M_MultiGrid( ff, uf, Nf, Nc, left, right, interleft, interright, M );
        maxdiff = 0.0;
        for ( i = 1; i < Nf; i++ ){
            diff = Uold[i] - uf[i];
            if ( diff < 0 ){
                diff = -diff;
            }
            if ( maxdiff < diff ){
                maxdiff = diff;
            }
        }
        count = count + 1;
    }

    printf( "Values on fine grid.\n" );
    printf( "%1.16f %1.16f\n", uf[Nf/2], uf[Nf/2] - sin( M_PI*dxf*(Nf/2) ) );
    printf( "\nThe finest grid is: %d\n", Nf );
    printf( "Multi-grid runs: %d times\n", count );

    /* Free memory allocation. */
    free( uc );
    free( uf );
    free( Uold );
    free( ff );
    free( left );
    free( right );
    free( interleft );
    free( interright );
    return ( 0 );
}