TRANSCRIPT
Jan S. Hesthaven, Brown University, [email protected]
DG-FEM for PDEs, Lecture 2
Univ Coruna/USC/Univ Vigo 2010
Monday, June 7, 2010
A brief overview of what’s to come
‣ Lecture 1: Introduction and DG in 1D
‣ Lecture 2: Implementation and numerical aspects
‣ Lecture 3: Insight through theory
‣ Lecture 4: Nonlinear problems
‣ Lecture 5: Mesh generation and 2D DG-FEM
‣ Lecture 6: Implementation and applications
‣ Lecture 7: Higher order/Global problems
‣ Lecture 8: 3D and advanced topics
Let’s recall
We already know a lot about the basic DG-FEM
‣ Stability is provided by carefully choosing the numerical flux.
‣ Accuracy appears to be given by the local solution representation.
‣ We can utilize major advances on monotone schemes to design fluxes.
‣ The scheme generalizes with very few changes to very general systems of conservation laws.

At least in principle -- but how do we do it in practice?
Recall the basic formulation
The numerical flux f* is where one can introduce knowledge of the dynamics of the problem [e.g., by upwinding through the flux as in a finite volume method (FVM)]. To mimic the terminology of the finite element scheme, we write the two local elementwise schemes as

Weak:

M^k (du_h^k/dt) − (S^k)^T f_h^k − M^k g_h^k = −f*(x^{k+1}) ℓ^k(x^{k+1}) + f*(x^k) ℓ^k(x^k)

Strong:

M^k (du_h^k/dt) + S^k f_h^k − M^k g_h^k = (f_h^k(x^{k+1}) − f*(x^{k+1})) ℓ^k(x^{k+1}) − (f_h^k(x^k) − f*(x^k)) ℓ^k(x^k).    (1.5)
We introduce the local notation
Here we have the vectors of local unknowns, u_h^k, of fluxes, f_h^k, and the forcing, g_h^k, all given on the nodes in each element. Given the duplication of unknowns at the element interfaces, each vector is 2K long. Furthermore, we have ℓ^k(x) = [ℓ_1^k(x), …, ℓ_{Np}^k(x)]^T and the local matrices

M_ij^k = ∫_{D^k} ℓ_i^k(x) ℓ_j^k(x) dx,    S_ij^k = ∫_{D^k} ℓ_i^k(x) (dℓ_j^k/dx) dx.
While the structure of the DG-FEM is very similar to that of the finite element method (FEM), there are several fundamental differences. In particular, the mass matrix is local rather than global and thus can be inverted at very little cost, yielding a semidiscrete scheme that is explicit. Furthermore, by carefully designing the numerical flux to reflect the underlying dynamics, one has more flexibility than in the classic FEM to ensure stability for wave-dominated problems. Compared with the FVM, the DG-FEM overcomes the key limitation on achieving high-order accuracy on general grids by enabling this through the local element-based basis. This is all achieved while maintaining benefits such as local conservation and flexibility in the choice of the numerical flux.

All of this, however, comes at a price – most notably through an increase in the total degrees of freedom as a direct result of the decoupling of the elements. For linear elements, this yields a doubling in the total number of degrees of freedom compared to the continuous FEM as discussed above. For certain problems, this clearly becomes an issue of significant importance; that is, if we seek a steady solution and need to invert the discrete operator S in Eq. (1.5), then the associated computational work scales directly with the size of the matrix. Furthermore, for problems where the flexibility in the flux choices and the locality of the scheme is of less importance (e.g., for elliptic problems), the DG-FEM is not as efficient as a better-suited method like the FEM.

On the other hand, for the DG-FEM, the operator tends to be more sparse than for a FEM of similar order, potentially leading to a faster solution procedure. The overhead caused by the doubling of degrees of freedom along element interfaces decreases as the order of the elements increases, thus reducing the penalty of the duplicated interface nodes somewhat. In Table 1.1 we also summarize the strengths and weaknesses of the DG-FEM. This simple – and simpleminded – comparative discussion also highlights that time-dependent wave-dominated problems emerge as the main candidates for problems where DG-FEM will be advantageous. As we will see shortly, this is also where they have achieved most prominence and widespread use during the last decade.
1.1 A brief account of history
The discontinuous Galerkin finite element method (DG-FEM) appears to have been proposed first in [269] as a way of solving the steady-state neutron transport equation

σu + ∇ · (au) = f,

where σ is a real constant, a(x) is piecewise constant, and u is the unknown. The first analysis of this method was presented in [217], showing O(h^N)-convergence on a general triangulated grid and optimal convergence rate, O(h^{N+1}), on a Cartesian grid of cell size h and with a local polynomial approximation of order N. This result was later improved in [190] to O(h^{N+1/2})-convergence on general grids. The optimality of this convergence rate was subsequently confirmed in [257] by a special example. These results assume smooth solutions, whereas the linear problems with nonsmooth solutions were analyzed in [63, 225]. Techniques for postprocessing on Cartesian grids to enhance
to obtain the semi-discrete forms of DG-FEM
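The strong form above can be exercised in a few lines of code. The following is a hypothetical minimal sketch (plain Python, not the lecture's Matlab codes): linear (N = 1) elements, the linear flux f(u) = au with a > 0, an upwind numerical flux, and a periodic mesh. The function name dg_rhs is ours; the element operators are hard-coded from their closed forms for the linear nodal basis on an element of length h, M = (h/6)[[2, 1], [1, 2]] and S = [[−1/2, 1/2], [−1/2, 1/2]].

```python
# Strong-form DG right-hand side for u_t + a u_x = 0 (a > 0), linear elements.
# M du/dt = -S f + (f - f*)|_right e_r - (f - f*)|_left e_l  per element,
# with e_l = [1, 0], e_r = [0, 1] for the linear nodal (hat-function) basis.

def dg_rhs(u, a, h):
    """u[k] = [left-node value, right-node value] on element k; returns du/dt."""
    K = len(u)
    # inverse of M = (h/6) [[2, 1], [1, 2]] is (2/h) [[2, -1], [-1, 2]]
    Minv = [[4.0 / h, -2.0 / h], [-2.0 / h, 4.0 / h]]
    rhs = []
    for k in range(K):
        f = [a * u[k][0], a * u[k][1]]           # local flux values f = a u
        sf = -0.5 * f[0] + 0.5 * f[1]            # (S f)_i, identical for both rows
        # upwind numerical fluxes for a > 0: take the state to the left
        fstar_left = a * u[(k - 1) % K][1]       # periodic left neighbor
        fstar_right = a * u[k][1]
        r = [-sf - (f[0] - fstar_left),          # left-node component
             -sf + (f[1] - fstar_right)]         # right-node component
        rhs.append([Minv[0][0] * r[0] + Minv[0][1] * r[1],
                    Minv[1][0] * r[0] + Minv[1][1] * r[1]])
    return rhs

# a constant state is an exact steady solution: the right-hand side vanishes
flat = dg_rhs([[1.0, 1.0]] * 4, a=1.0, h=0.5)
assert all(abs(v) < 1e-14 for elem in flat for v in elem)
```

Time integration (e.g., an explicit Runge-Kutta method applied to du/dt = dg_rhs(u, a, h)) and inflow boundary conditions are deliberately omitted from this sketch.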
... but how do we bring this to life?
We should seek
‣ A flexible and generic framework for solving 1D problems using DG-FEM
‣ Easy maintenance and setup
‣ Separation of grids, discretization, and physics
‣ Easy implementation
Let’s see if we can achieve this.
The local approximation
We recall that we have

u(x, t) ≈ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t),

where the computational domain is

Ω ≈ Ω_h = ⋃_{k=1}^{K} D^k.

Domain of interest

We want to solve a given problem in a domain Ω which is approximated by the union of K nonoverlapping local elements D^k, k = 1, …, K, such that Ω ≈ Ω_h. Thus, we need to deal with implementation issues with respect to the local elements and how they are related.

The local elements can in principle be of any shape; in practice, however, we mostly consider d-dimensional simplexes (e.g., triangles in two dimensions).
Sketch and notations for a one-dimensional domain
Consider a one-dimensional domain defined on x ∈ [L, R].

[Sketch: elements D^{k−1}, D^k, D^{k+1} along the x-axis, with L = x_l^1, x_r^{k−1} = x_l^k, x_r^k = x_l^{k+1}, x_r^K = R, and element length h^k = x_r^k − x_l^k.]
The local approximation
3 Making it work in one dimension
As simple as the formulations in the last chapter appear, there is often a leap between mathematical formulations and an actual implementation of the algorithms. This is particularly true when one considers important issues such as efficiency, flexibility, and robustness of the resulting methods.

In this chapter we address these issues by first discussing details such as the form of the local basis and, subsequently, how one implements the nodal DG-FEMs in a flexible way. To keep things simple, we continue the emphasis on one-dimensional linear problems, although this results in a few apparently unnecessarily complex constructions. We ask the reader to bear with us, as this slightly more general approach will pay off when we begin to consider more complex nonlinear and/or higher-dimensional problems.
3.1 Legendre polynomials and nodal elements
In Chapter 2 we started out by assuming that one can represent the global solution as the direct sum of local piecewise polynomial solutions,

u(x, t) ≈ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t).

The careful reader will note that this notation is a bit careless, as we do not address what exactly happens at the overlapping interfaces. However, a more careful definition does not add anything essential at this point, and we will use this notation to reflect that the global solution is obtained by combining the K local solutions as defined by the scheme.
The local solutions are assumed to be of the form

x ∈ D^k = [x_l^k, x_r^k] :  u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x).
On each element we have
using either a modal or a nodal representation
We introduce an affine mapping as
Local approximation in 1D

On each of the local elements, we choose to represent the solution locally as a polynomial of arbitrary order N = Np − 1:

x ∈ D^k :  u_h^k(x(r), t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(r) = Σ_{i=1}^{Np} u_i^k(t) ℓ_i^k(r),

using either a modal or nodal representation on r ∈ I = [−1, 1].

We have introduced the affine mapping from the standard element I to the k-th element D^k:

x ∈ D^k :  x(r) = x_l^k + ((1 + r)/2) h^k,    h^k = x_r^k − x_l^k.

[Sketch: the affine map between D^k = [x_l^k, x_r^k] and the reference element I = [−1, 1].]
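The affine map and its inverse are two one-liners. A minimal sketch in plain Python (the function names are ours, chosen for illustration):

```python
# Affine map x(r) = x_l^k + (1 + r)/2 * h^k from the reference element
# I = [-1, 1] to the physical element D^k = [x_l^k, x_r^k].

def affine_map(r, xl, xr):
    """Map reference coordinate r in [-1, 1] to x in [xl, xr]."""
    h = xr - xl                    # element length h^k
    return xl + 0.5 * (1.0 + r) * h

def affine_map_inverse(x, xl, xr):
    """Map physical coordinate x in [xl, xr] back to r in [-1, 1]."""
    return 2.0 * (x - xl) / (xr - xl) - 1.0

# the endpoints of I map to the endpoints of D^k
assert affine_map(-1.0, 0.25, 0.75) == 0.25
assert affine_map(+1.0, 0.25, 0.75) == 0.75
# the map and its inverse compose to the identity (up to rounding)
assert abs(affine_map_inverse(affine_map(0.3, 0.25, 0.75), 0.25, 0.75) - 0.3) < 1e-12
```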
The local approximation
Modal representation is based on Legendre polynomials
Local approximation in 1D

The modal representation is usually based on a hierarchical modal basis, e.g., the normalized Legendre polynomials from the class of orthogonal Jacobi polynomials:

ψ_n(r) = P̃_{n−1}(r) = P_{n−1}(r) / √γ_{n−1},    γ_n = 2 / (2n + 1).

Note, the orthogonality property implies that for the normalized basis

∫_{−1}^{1} P̃_i(r) P̃_j(r) dr = δ_ij.

The nodal representation is based on the interpolating Lagrange polynomials, which enjoy the cardinal property

ℓ_i^k(x_j^k) = δ_ij,    δ_ij = { 0, i ≠ j;  1, i = j }.
Local approximation in 1D - modes

Normalized Legendre polynomials ψ_n(r) = P̃_{n−1}(r).

[Plots: ψ_1(r), …, ψ_6(r) on r ∈ [−1, 1].]
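The normalized Legendre basis is easy to build from the standard three-term recurrence (n+1) P_{n+1}(r) = (2n+1) r P_n(r) − n P_{n−1}(r). A self-contained sketch in plain Python (function names are ours; the inner product is approximated by composite Simpson quadrature, which is ample for these low-degree polynomials):

```python
import math

def legendre(n, r):
    """Evaluate the (unnormalized) Legendre polynomial P_n at r."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, r
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * r * p - k * p_prev) / (k + 1)
    return p

def legendre_normalized(n, r):
    """P~_n(r) = P_n(r)/sqrt(gamma_n), gamma_n = 2/(2n+1); orthonormal on [-1, 1]."""
    return legendre(n, r) / math.sqrt(2.0 / (2 * n + 1))

def inner_product(f, g, m=2000):
    """Composite-Simpson approximation of the integral of f*g over [-1, 1]."""
    h = 2.0 / m
    s = f(-1.0) * g(-1.0) + f(1.0) * g(1.0)
    for i in range(1, m):
        r = -1.0 + i * h
        s += (4 if i % 2 else 2) * f(r) * g(r)
    return s * h / 3.0

# orthonormality: (P~_i, P~_j) = delta_ij
for i in range(5):
    for j in range(5):
        ip = inner_product(lambda r: legendre_normalized(i, r),
                           lambda r: legendre_normalized(j, r))
        assert abs(ip - (1.0 if i == j else 0.0)) < 1e-8
```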
The local approximation
Nodal representation based on Lagrange polynomials
Local approximation in 1D - nodes

Interpolating Lagrange polynomials ℓ_n(r) based on the Legendre-Gauss-Lobatto (LGL) nodes.

[Plots: ℓ_1(r), …, ℓ_6(r) on r ∈ [−1, 1].]
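The cardinal property ℓ_i(r_j) = δ_ij can be checked directly from the product formula for the Lagrange polynomials. A small sketch in plain Python, using the N = 4 LGL nodes, which are known in closed form (0, ±√(3/7), ±1); the function name is ours:

```python
import math

def lagrange_basis(i, r, nodes):
    """Evaluate l_i(r) for the node set `nodes` via the product formula."""
    val = 1.0
    for j, rj in enumerate(nodes):
        if j != i:
            val *= (r - rj) / (nodes[i] - rj)
    return val

# the N = 4 Legendre-Gauss-Lobatto nodes in closed form
lgl4 = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]

# cardinal property: l_i(r_j) = delta_ij
for i in range(5):
    for j in range(5):
        v = lagrange_basis(i, lgl4[j], lgl4)
        assert abs(v - (1.0 if i == j else 0.0)) < 1e-14
```

For general N the LGL nodes have no closed form and are computed numerically (e.g., from the zeros of P'_N), which is what the lecture's JacobiGL routine does.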
Local approximation in 1D

For stable and accurate computations we need to ensure that the generalized Vandermonde matrix V is well-conditioned. This implies minimizing the Lebesgue constant

Λ = max_r Σ_{i=1}^{Np} |ℓ_i(r)|

and maximizing det V.
[Plots: det V and the Lebesgue constant Λ(V) versus N for equidistant nodes vs. normalized LGL nodes; the equidistant choice deteriorates rapidly with N.]
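The Lebesgue constant can be estimated by sampling max_r Σ_i |ℓ_i(r)| on a fine grid. A self-contained sketch in plain Python (the helper is restated so the snippet stands alone; since general LGL nodes require root-finding, the closed-form Chebyshev-Gauss-Lobatto nodes are used here as a stand-in, another endpoint-clustered node set with slowly growing Lebesgue constant):

```python
import math

def lagrange_basis(i, r, nodes):
    val = 1.0
    for j, rj in enumerate(nodes):
        if j != i:
            val *= (r - rj) / (nodes[i] - rj)
    return val

def lebesgue_constant(nodes, samples=2001):
    """Sampled approximation of Lambda = max_r sum_i |l_i(r)| on [-1, 1]."""
    best = 0.0
    for s in range(samples):
        r = -1.0 + 2.0 * s / (samples - 1)
        total = sum(abs(lagrange_basis(i, r, nodes)) for i in range(len(nodes)))
        best = max(best, total)
    return best

N = 10
equi = [-1.0 + 2.0 * i / N for i in range(N + 1)]                 # equidistant
cgl = [-math.cos(math.pi * i / N) for i in range(N + 1)]          # Chebyshev-Gauss-Lobatto

# clustered nodes give a dramatically smaller Lebesgue constant than equidistant ones
assert lebesgue_constant(cgl) < lebesgue_constant(equi)
assert lebesgue_constant(equi) > 20.0    # equidistant: roughly 30 for N = 10
assert lebesgue_constant(cgl) < 3.0      # clustered: order of a few
```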
The local approximation
These two representations are connected as
2 The key ideas
2.1 Briefly on notation
While we initially strive to stay away from complex notation, a few fundamentals are needed. We consider problems posed on the physical domain Ω with boundary ∂Ω and assume that this domain is well approximated by the computational domain Ω_h. This is a space-filling triangulation composed of a collection of K geometry-conforming nonoverlapping elements, D^k. The shape of these elements can be arbitrary, although we will mostly consider cases where they are d-dimensional simplexes.
We define the local inner product and L²(D^k) norm

(u, v)_{D^k} = ∫_{D^k} u v dx,    ‖u‖²_{D^k} = (u, u)_{D^k},

as well as the global broken inner product and norm

(u, v)_{Ω,h} = Σ_{k=1}^{K} (u, v)_{D^k},    ‖u‖²_{Ω,h} = (u, u)_{Ω,h}.

Here, (Ω, h) reflects that Ω is only approximated by the union of D^k, that is,

Ω ≈ Ω_h = ⋃_{k=1}^{K} D^k,
although we will not distinguish the two domains unless needed.

Generally, one has local information as well as information from the neighboring element along an intersection between two elements. Often we will refer to the union of these intersections in an element as the trace of the element. For the methods we discuss here, we will have two or more solutions or boundary conditions at the same physical location along the trace of the element.
The average of the two solutions across an intersection is denoted by {{u}} = (u⁻ + u⁺)/2, where u can be both a scalar and a vector. In a similar fashion we also define the jumps along a normal, n, as

[[u]] = n⁻ u⁻ + n⁺ u⁺ (scalar u),    [[u]] = n⁻ · u⁻ + n⁺ · u⁺ (vector u).

Note that the jump is defined differently depending on whether u is a scalar or a vector.
2.2 Basic elements of the schemes
In the following, we introduce the key ideas behind the family of discontinuous element methodsthat are the main topic of this text. Before getting into generalizations and abstract ideas, let usdevelop a basic understanding of the schemes through a simple example.
2.2.1 The first schemes
Consider the linear scalar wave equation
∂u/∂t + ∂f(u)/∂x = 0,    x ∈ [L, R] = Ω,    (2.1)

where the linear flux is given as f(u) = au. This is subject to the appropriate initial conditions

u(x, 0) = u_0(x).

Boundary conditions are given when the boundary is an inflow boundary, that is,

u(L, t) = g(t) if a ≥ 0,    u(R, t) = g(t) if a ≤ 0.
We approximate Ω by K nonoverlapping elements, x ∈ [x_l^k, x_r^k] = D^k, as illustrated in Fig. 2.1. On each of these elements we express the local solution as a polynomial of order N:

x ∈ D^k :  u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x).

Here, we have introduced two complementary expressions for the local solution. In the first one, known as the modal form, we use a local polynomial basis, ψ_n(x). A simple example of this could be ψ_n(x) = x^{n−1}. In the alternative form, known as the nodal representation, we introduce Np = N + 1 local grid points, x_i^k ∈ D^k, and express the polynomial through the associated interpolating Lagrange polynomials, ℓ_i^k(x). The connection between these two forms is through the definition of the expansion coefficients, û_n^k. We return to a discussion of these choices in much more detail later; for now it suffices to assume that we have chosen one of these representations.
The global solution u(x, t) is then assumed to be approximated by the piecewise N-th-order polynomial approximation u_h(x, t),

u(x, t) ≈ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t).
[Fig. 3.3. On the left is shown the location of the left half of the Jacobi-Gauss-Lobatto nodes for N = 16 as a function of α, the type of the polynomial. Note that α = 0 corresponds to the LGL nodes. On the right is shown the behavior of the Vandermonde determinant as a function of α for different values of N (N = 4, 8, 16, 24, 32), confirming the optimality of the LGL nodes but also showing robustness of the interpolation to this choice.]
decay in the value of the determinant, indicating possible problems in the interpolation. Considering the corresponding nodal distribution in Fig. 3.3 for these values of α, this is perhaps not surprising.
This highlights that it is the overall structure of the nodes rather thanthe details of the individual node position that is important; for example, onecould optimize these nodal sets for various applications.
To summarize matters, we have local approximations of the form

u(r) ≈ u_h(r) = Σ_{n=1}^{Np} û_n P̃_{n−1}(r) = Σ_{i=1}^{Np} u(r_i) ℓ_i(r),    (3.3)

where the grid points r_i are the Legendre-Gauss-Lobatto quadrature points. A central component of this construction is the Vandermonde matrix, V, which establishes the connections

û = V⁻¹u, i.e., u = V û,    V^T ℓ(r) = P̃(r),    V_ij = P̃_{j−1}(r_i).

By carefully choosing the orthonormal Legendre basis, P̃_n(r), and the nodal points, r_i, we have ensured that V is a well-conditioned object and that the resulting interpolation is well behaved. A script for initializing V is given in Vandermonde1D.m.
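A plain-Python sketch of the same construction (not the Vandermonde1D.m script itself; helper names are ours, and the closed-form N = 4 LGL nodes are used): build V_ij = P̃_{j−1}(r_i), form nodal values u = V û from a modal coefficient vector, and check that the modal and nodal representations agree at an arbitrary point, as Eq. (3.3) demands.

```python
import math

def legendre(n, r):
    if n == 0:
        return 1.0
    p_prev, p = 1.0, r
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * r * p - k * p_prev) / (k + 1)
    return p

def pn_tilde(n, r):
    """Normalized Legendre polynomial, orthonormal on [-1, 1]."""
    return legendre(n, r) * math.sqrt((2 * n + 1) / 2.0)

def lagrange_basis(i, r, nodes):
    v = 1.0
    for j, rj in enumerate(nodes):
        if j != i:
            v *= (r - rj) / (nodes[i] - rj)
    return v

# closed-form N = 4 LGL nodes
nodes = [-1.0, -math.sqrt(3.0 / 7.0), 0.0, math.sqrt(3.0 / 7.0), 1.0]
Np = len(nodes)

# Vandermonde matrix V_ij = P~_{j-1}(r_i)
V = [[pn_tilde(j, nodes[i]) for j in range(Np)] for i in range(Np)]

# nodal values u = V u_hat of a modal expansion
u_hat = [0.3, -1.2, 0.5, 0.0, 2.0]
u = [sum(V[i][j] * u_hat[j] for j in range(Np)) for i in range(Np)]

# both representations are the same polynomial, so they agree off the nodes too
r = 0.37
modal = sum(u_hat[n] * pn_tilde(n, r) for n in range(Np))
nodal = sum(u[i] * lagrange_basis(i, r, nodes) for i in range(Np))
assert abs(modal - nodal) < 1e-12
```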
We did not, however, discuss the specifics of this representation, as this is less important from a theoretical point of view. The results in Example 2.4 clearly illustrate, however, that the accuracy of the method is closely linked to the order of the local polynomial representation, and some care is warranted when choosing this.
Let us begin by introducing the affine mapping

x ∈ D^k :  x(r) = x_l^k + ((1 + r)/2) h^k,    h^k = x_r^k − x_l^k,    (3.1)

with the reference variable r ∈ I = [−1, 1]. We consider local polynomial representations of the form

x ∈ D^k :  u_h^k(x(r), t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(r) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(r).
Let us first discuss the local modal expansion,

u_h(r) = Σ_{n=1}^{Np} û_n ψ_n(r),

where we have dropped the superscript for element k and the explicit time dependence, t, for clarity of notation.
As a first choice, one could consider ψ_n(r) = r^{n−1} (i.e., the simple monomial basis). This leaves only the question of how to recover û_n. A natural way is by an L²-projection; that is, by requiring that

(u(r), ψ_m(r))_I = Σ_{n=1}^{Np} û_n (ψ_n(r), ψ_m(r))_I

for each of the Np basis functions ψ_m. We have introduced the inner product on the interval I as

(u, v)_I = ∫_{−1}^{1} u v dr.
This yields M û = u, where

M_ij = (ψ_i, ψ_j)_I,    û = [û_1, …, û_{Np}]^T,    u_i = (u, ψ_i)_I,

leading to Np equations for the Np unknown expansion coefficients, û_i. However, note that

M_ij = (1 + (−1)^{i+j}) / (i + j − 1),    (3.2)

which resembles a Hilbert matrix, known to be very poorly conditioned. If we compute the condition number, κ(M), for M for increasing order of approximation, N, we observe in Table 3.1 the very rapidly deteriorating conditioning of M. The reason for this is evident in Eq. (3.2), where the coefficient
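Eq. (3.2) can be verified exactly: each entry is the integral of the monomial product r^{i−1} r^{j−1} over [−1, 1]. A small sketch in plain Python using exact rational arithmetic (function names are ours):

```python
# Verify M_ij = (1 + (-1)^(i+j)) / (i + j - 1) by exact integration of
# int_{-1}^{1} r^{i+j-2} dr using the antiderivative r^p / p, p = i + j - 1.
from fractions import Fraction

def mass_entry_exact(i, j):
    """Exact integral of r^(i-1) * r^(j-1) over [-1, 1]."""
    p = i + j - 1
    return Fraction(1, p) - Fraction((-1) ** p, p)

def mass_entry_formula(i, j):
    """Closed form (3.2)."""
    return Fraction(1 + (-1) ** (i + j), i + j - 1)

for i in range(1, 9):
    for j in range(1, 9):
        assert mass_entry_exact(i, j) == mass_entry_formula(i, j)
```

Note that the even-odd structure, (1 + (−1)^{i+j}), makes every other entry vanish, while the surviving entries decay like 1/(i + j), exactly the Hilbert-matrix pattern responsible for the poor conditioning.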
We recall that we have
To make this accurate, we have made some choices
The local approximation
2
The key ideas
2.1 Briefly on notation
While we initially strive to stay away from complex notation a few funda-mentals are needed. We consider problems posed on the physical domain !with boundary "! and assume that this domain is well approximated bythe computational domain !h. This is a space filling triangulation composedof a collection of K geometry-conforming nonoverlapping elements, Dk. Theshape of these elements can be arbitrary although we will mostly considercases where they are d-dimensional simplexes.
We define the local inner product and L2(Dk) norm
(u, v)Dk =!
Dkuv dx, !u!2
Dk = (u, u)Dk ,
as well as the global broken inner product and norm
(u, v)!,h =K"
k=1
(u, v)Dk , !u!2!,h = (u, u)!,h .
Here, (!, h) reflects that ! is only approximated by the union of Dk, that is
! " !h =K#
k=1
Dk,
although we will not distinguish the two domains unless needed.Generally, one has local information as well as information from the neigh-
boring element along an intersection between two elements. Often we will referto the union of these intersections in an element as the trace of the element.For the methods we discuss here, we will have two or more solutions or bound-ary conditions at the same physical location along the trace of the element.
where u can be both a scalar and a vector. In a similar fashion, we also define the jumps along a normal, n, as

[[u]] = n⁻u⁻ + n⁺u⁺,  [[u]] = n⁻ · u⁻ + n⁺ · u⁺.

Note that the jump is defined differently depending on whether its argument is a scalar, u, or a vector, u.
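As a concrete illustration, here is a tiny Python sketch (our helper names, not from the text) of the scalar jump in one dimension, together with the usual average {{u}} = (u⁻ + u⁺)/2, at an interface where the outward normals satisfy n⁺ = −n⁻:

```python
def jump(u_minus, u_plus, n_minus=1.0):
    # [[u]] = n^- u^- + n^+ u^+ with n^+ = -n^- in one dimension
    return n_minus * u_minus - n_minus * u_plus

def average(u_minus, u_plus):
    # {{u}} = (u^- + u^+) / 2
    return 0.5 * (u_minus + u_plus)

print(jump(3.0, 5.0), average(3.0, 5.0))  # -2.0 4.0
```

The jump vanishes exactly when the two traces agree, which is what the numerical flux will exploit.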
2.2 Basic elements of the schemes
In the following, we introduce the key ideas behind the family of discontinuous element methodsthat are the main topic of this text. Before getting into generalizations and abstract ideas, let usdevelop a basic understanding of the schemes through a simple example.
2.2.1 The first schemes
Consider the linear scalar wave equation

∂u/∂t + ∂f(u)/∂x = 0,  x ∈ [L, R] = Ω,  (2.1)

where the linear flux is given as f(u) = au. This is subject to the appropriate initial conditions

u(x, 0) = u₀(x).

Boundary conditions are given when the boundary is an inflow boundary; that is,

u(L, t) = g(t) if a ≥ 0,  u(R, t) = g(t) if a ≤ 0.
We approximate Ω by K nonoverlapping elements, x ∈ [x_l^k, x_r^k] = D^k, as illustrated in Fig. 2.1. On each of these elements we express the local solution as a polynomial of order N:

x ∈ D^k : u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x).

Here, we have introduced two complementary expressions for the local solution. In the first one, known as the modal form, we use a local polynomial basis, ψ_n(x). A simple example of this could be ψ_n(x) = x^{n−1}. In the alternative form, known as the nodal representation, we introduce Np = N + 1 local grid points, x_i^k ∈ D^k, and express the polynomial through the associated interpolating Lagrange polynomial, ℓ_i^k(x). The connection between these two forms is through the definition of the expansion coefficients, û_n^k. We return to a discussion of these choices in much more detail later; for now it suffices to assume that we have chosen one of these representations.

The global solution u(x, t) is then assumed to be approximated by the piecewise N-th-order polynomial approximation u_h(x, t),

u(x, t) ≅ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x, t).
3 Making it work in one dimension
As simple as the formulations in the last chapter appear, there is often a leap between mathematical formulations and an actual implementation of the algorithms. This is particularly true when one considers important issues such as efficiency, flexibility, and robustness of the resulting methods.

In this chapter we address these issues by first discussing details such as the form of the local basis and, subsequently, how one implements the nodal DG-FEMs in a flexible way. To keep things simple, we continue the emphasis on one-dimensional linear problems, although this results in a few apparently unnecessarily complex constructions. We ask the reader to bear with us, as this slightly more general approach will pay off when we begin to consider more complex nonlinear and/or higher-dimensional problems.
3.1 Legendre polynomials and nodal elements

In Chapter 2 we started out by assuming that one can represent the global solution as the direct sum of local piecewise polynomial solutions as

u(x, t) ≅ u_h(x, t) = ⊕_{k=1}^{K} u_h^k(x^k, t).

The careful reader will note that this notation is a bit careless, as we do not address what exactly happens at the overlapping interfaces. However, a more careful definition does not add anything essential at this point and we will use this notation to reflect that the global solution is obtained by combining the K local solutions as defined by the scheme.

The local solutions are assumed to be of the form

x ∈ D^k = [x_l^k, x_r^k] : u_h^k(x, t) = Σ_{n=1}^{Np} û_n^k(t) ψ_n(x) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x).
[Fig. 3.3. On the left is shown the location of the left half of the Jacobi-Gauss-Lobatto nodes for N = 16 as a function of α, the type of the polynomial; α = 0 corresponds to the LGL nodes. On the right is shown the behavior of the Vandermonde determinant as a function of α for different values of N (N = 4, 8, 16, 24, 32), confirming the optimality of the LGL nodes but also showing robustness of the interpolation to this choice.]
decay in the value of the determinant, indicating possible problems in the interpolation. Considering the corresponding nodal distribution in Fig. 3.3 for these values of α, this is perhaps not surprising.

This highlights that it is the overall structure of the nodes rather than the details of the individual node positions that is important; for example, one could optimize these nodal sets for various applications.

To summarize matters, we have local approximations of the form

u(r) ≅ u_h(r) = Σ_{n=1}^{Np} û_n P̃_{n−1}(r) = Σ_{i=1}^{Np} u(r_i) ℓ_i(r),  (3.3)

where the nodal points r_i are the Legendre-Gauss-Lobatto quadrature points. A central component of this construction is the Vandermonde matrix, V, which establishes the connections

u = V û,  V^T ℓ(r) = P̃(r),  V_ij = P̃_{j−1}(r_i).

By carefully choosing the orthonormal Legendre basis, P̃_n(r), and the nodal points, r_i, we have ensured that V is a well-conditioned object and that the resulting interpolation is well behaved. A script for initializing V is given in Vandermonde1D.m.
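To see the claimed conditioning in practice, here is a Python/NumPy sketch (our stand-ins for JacobiGL and Vandermonde1D, under the definitions above): it builds V_ij = P̃_{j−1}(r_i) from the orthonormal Legendre polynomials P̃_n = √((2n+1)/2) P_n and compares LGL nodes against equispaced nodes.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes(N):
    # Legendre-Gauss-Lobatto nodes: +-1 together with the roots of P_N'
    interior = leg.Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def vandermonde_1d(N, r):
    # V_ij = Ptilde_{j-1}(r_i); columns scaled so (Ptilde_m, Ptilde_n)_I = delta_mn
    return leg.legvander(r, N) * np.sqrt((2 * np.arange(N + 1) + 1) / 2.0)

N = 16
cond_lgl = np.linalg.cond(vandermonde_1d(N, lgl_nodes(N)))
cond_equi = np.linalg.cond(vandermonde_1d(N, np.linspace(-1, 1, N + 1)))
print(cond_lgl, cond_equi)  # the LGL choice is far better conditioned
```

The same orthonormal basis at equispaced nodes gives a much larger condition number, confirming that both the basis and the points matter.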
‣ Conditioning of V is determined by choice of points
‣ Optimal points: Legendre-Gauss-Lobatto points
‣ Important for robustness: orthogonal local basis and good points for interpolation
Monday, June 7, 2010
The local approximation
To implement a typical semi-discrete scheme
Here we have the vectors of local unknowns, u_h^k, of fluxes, f_h^k, and the forcing, g_h^k, all given on the nodes in each element. Given the duplication of unknowns at the element interfaces, each vector is 2K long. Furthermore, we have ℓ^k(x) = [ℓ_1^k(x), . . . , ℓ_{Np}^k(x)]^T and the local matrices

M_ij^k = ∫_{D^k} ℓ_i^k(x) ℓ_j^k(x) dx,  S_ij^k = ∫_{D^k} ℓ_i^k(x) dℓ_j^k/dx dx.
While the structure of the DG-FEM is very similar to that of the finite element method (FEM), thereare several fundamental differences. In particular, the mass matrix is local rather than global andthus can be inverted at very little cost, yielding a semidiscrete scheme that is explicit. Furthermore,by carefully designing the numerical flux to reflect the underlying dynamics, one has more flexibilitythan in the classic FEM to ensure stability for wave-dominated problems. Compared with the FVM,the DG-FEM overcomes the key limitation on achieving high-order accuracy on general grids byenabling this through the local element-based basis. This is all achieved while maintaining benefitssuch as local conservation and flexibility in the choice of the numerical flux.
All of this, however, comes at a price – most notably through an increase in the total degrees of freedom as a direct result of the decoupling of the elements. For linear elements, this yields a doubling in the total number of degrees of freedom compared to the continuous FEM as discussed above. For certain problems, this clearly becomes an issue of significant importance; that is, if we seek a steady solution and need to invert the discrete operator S in Eq. (1.5), then the associated computational work scales directly with the size of the matrix. Furthermore, for problems where the flexibility in the flux choices and the locality of the scheme are of less importance (e.g., for elliptic problems), the DG-FEM is not as efficient as a better-suited method like the FEM.
On the other hand, for the DG-FEM, the operator tends to be more sparse than for a FEM of similar order, potentially leading to a faster solution procedure. The overhead caused by the doubling of degrees of freedom along element interfaces decreases as the order of the elements increases, thus reducing the penalty of the duplicated interface nodes somewhat. In Table 1.1 we also summarize the strengths and weaknesses of the DG-FEM. This simple – and simpleminded – comparative discussion also highlights that time-dependent wave-dominated problems emerge as the main candidates for problems where DG-FEM will be advantageous. As we will see shortly, this is also where they have achieved most prominence and widespread use during the last decade.
1.1 A brief account of history
The discontinuous Galerkin finite element method (DG-FEM) appears to have been proposed first in [269] as a way of solving the steady-state neutron transport equation
σu +∇ · (au) = f,
where σ is a real constant, a(x) is piecewise constant, and u is the unknown. The first analysis of this method was presented in [217], showing O(h^N)-convergence on a general triangulated grid and optimal convergence rate, O(h^{N+1}), on a Cartesian grid of cell size h and with a local polynomial approximation of order N. This result was later improved in [190] to O(h^{N+1/2})-convergence on general grids. The optimality of this convergence rate was subsequently confirmed in [257] by a special example. These results assume smooth solutions, whereas linear problems with nonsmooth solutions were analyzed in [63, 225]. Techniques for postprocessing on Cartesian grids to enhance
We need several local operators
Consider first the local mass-matrix
Vandermonde1D.m

function [V1D] = Vandermonde1D(N,r)
% function [V1D] = Vandermonde1D(N,r)
% Purpose : Initialize the 1D Vandermonde Matrix, V_ij = phi_j(r_i);
V1D = zeros(length(r), N+1);
for j=1:N+1
  V1D(:,j) = JacobiP(r(:), 0, 0, j-1);
end;
return
Using V, we can transform directly between modal representations, using û_n as the unknowns, and nodal representations, using u(r_i) as the unknowns. The two representations are mathematically equivalent but computationally different, and they each have certain advantages and disadvantages, depending on the application at hand.

In what remains, we will focus on the nodal representation, which has advantages for more complex problems, as we will see later. However, it is important to keep the alternative in mind as an optional representation.
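As a quick sanity check of this equivalence, here is a Python sketch (our own helper names, mirroring JacobiGL/Vandermonde1D): sample a polynomial at the LGL nodes, obtain the modal coefficients as û = V⁻¹u, and evaluate the modal expansion at fresh points.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes(N):
    interior = leg.Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def vandermonde_1d(N, r):
    return leg.legvander(r, N) * np.sqrt((2 * np.arange(N + 1) + 1) / 2.0)

N = 4
r = lgl_nodes(N)
u = r**3 - r                                      # nodal values (degree 3 <= N)
uhat = np.linalg.solve(vandermonde_1d(N, r), u)   # modal coefficients: u = V uhat

s = np.linspace(-1.0, 1.0, 11)                    # evaluate the modal form elsewhere
err = np.max(np.abs(vandermonde_1d(N, s) @ uhat - (s**3 - s)))
print(err)
```

Since both forms span the same polynomial space, the reconstruction is exact up to rounding.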
3.2 Elementwise operations

With the development of a suitable local representation of the solution, we are now prepared to discuss the various elementwise operators needed to implement the algorithms from Chapter 2.
We begin with the two operators M^k and S^k. For the former, the entries are given as

M_ij^k = ∫_{x_l^k}^{x_r^k} ℓ_i^k(x) ℓ_j^k(x) dx = (h^k/2) ∫_{−1}^{1} ℓ_i(r) ℓ_j(r) dr = (h^k/2)(ℓ_i, ℓ_j)_I = (h^k/2) M_ij,

where the coefficient in front is the Jacobian coming from Eq. (3.1) and M is the mass matrix defined on the reference element I.
Recall that

ℓ_i(r) = Σ_{n=1}^{Np} (V^T)⁻¹_{in} P̃_{n−1}(r),

from which we recover

M_ij = ∫_{−1}^{1} [Σ_{n=1}^{Np} (V^T)⁻¹_{in} P̃_{n−1}(r)] [Σ_{m=1}^{Np} (V^T)⁻¹_{jm} P̃_{m−1}(r)] dr
     = Σ_{n=1}^{Np} Σ_{m=1}^{Np} (V^T)⁻¹_{in} (V^T)⁻¹_{jm} (P̃_{n−1}, P̃_{m−1})_I = Σ_{n=1}^{Np} (V^T)⁻¹_{in} (V^T)⁻¹_{jn},
where boundary conditions are needed. Clearly, it is reasonable that the numerical approximation to the wave equation displays similar behavior.

Assume that we use a nodal approach, i.e.,

x ∈ D^k : u_h^k(x, t) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x),

and consider the strong form given as

M^k d/dt u_h^k + S^k a u_h^k = [ℓ^k(x)(a u_h^k − (a u_h)*)]_{x_l^k}^{x_r^k}.  (2.8)

Here, we have introduced the nodal versions of the local operators

M_ij^k = (ℓ_i^k, ℓ_j^k)_{D^k},  S_ij^k = (ℓ_i^k, dℓ_j^k/dx)_{D^k}.

First, realize that

u_h^T M^k u_h = ∫_{D^k} [Σ_{i=1}^{Np} u_h^k(x_i^k) ℓ_i^k(x)] [Σ_{j=1}^{Np} u_h^k(x_j^k) ℓ_j^k(x)] dx = ‖u_h^k‖²_{D^k};

that is, it recovers the local energy. Furthermore, consider

u_h^T S^k u_h = ∫_{D^k} [Σ_{i=1}^{Np} u_h^k(x_i^k) ℓ_i^k(x)] [Σ_{j=1}^{Np} u_h^k(x_j^k) dℓ_j^k/dx] dx = ∫_{D^k} u_h^k(x) (u_h^k(x))′ dx = ½ [(u_h^k)²]_{x_l^k}^{x_r^k}.

Thus, it mimics an integration-by-parts form.

By multiplying Eq. (2.8) with u_h^T we directly recover

d/dt ‖u_h^k‖²_{D^k} = −a[(u_h^k)²]_{x_l^k}^{x_r^k} + 2[u_h^k(a u_h^k − (au)*)]_{x_l^k}^{x_r^k}.  (2.9)

For stability, we must, as for the original equation, require that

Σ_{k=1}^{K} d/dt ‖u_h^k‖²_{D^k} = d/dt ‖u_h‖²_{Ω,h} ≤ 0.

It suffices to control the terms associated with the coupling of the interfaces, each of which looks like

n⁻ · [a u_h²(x⁻) − 2 u_h(x⁻)(au)*(x⁻)] + n⁺ · [a u_h²(x⁺) − 2 u_h(x⁺)(au)*(x⁺)] ≤ 0

at every interface. Here (x⁻, x⁺) refers to the left (e.g., x_r^k) and right side (e.g., x_l^{k+1}) of the interface, respectively. We also have that n⁻ = −n⁺.

Let us consider a numerical flux like

f* = (au)* = {{au}} + |a| (1 − α)/2 [[u]].
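The role of α is easy to see numerically. Below is a minimal Python sketch (our function name) of this flux, written with n⁻ = 1 so that [[u]] = u⁻ − u⁺:

```python
def numerical_flux(a, u_minus, u_plus, alpha):
    # (au)* = {{au}} + |a| (1 - alpha)/2 [[u]], with [[u]] = u^- - u^+ here
    avg = 0.5 * (a * u_minus + a * u_plus)
    return avg + abs(a) * (1.0 - alpha) / 2.0 * (u_minus - u_plus)

# alpha = 0: pure upwinding -- for a > 0 the flux takes the upwind value a u^-
print(numerical_flux(2.0, 3.0, 5.0, alpha=0.0))  # 6.0 = 2.0 * 3.0
# alpha = 1: central flux -- the plain average {{au}}
print(numerical_flux(2.0, 3.0, 5.0, alpha=1.0))  # 8.0
```

For a < 0, the α = 0 choice symmetrically picks the right-hand value, so the flux is always upwinded with respect to the wave direction.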
The local approximation
Recall first

Element-wise operations: Earlier, a relationship between the coefficients of our modal and nodal expansions was found,

u = V û.

Also, we can relate the modal and nodal bases through

u^T ℓ(r) = û^T ψ(r).

These relationships can be combined to relate the basis functions through V:

u^T ℓ(r) = û^T ψ(r) ⟹ (V û)^T ℓ(r) = û^T ψ(r) ⟹ û^T V^T ℓ(r) = û^T ψ(r) ⟹ V^T ℓ(r) = ψ(r).
The local approximation
Using this, we recover
where the latter follows from the orthonormality of P̃_n(r). Thus, we recover

M^k = (h^k/2) M = (h^k/2) (V V^T)⁻¹.

Let us now consider S^k, given as

S_ij^k = ∫_{x_l^k}^{x_r^k} ℓ_i^k(x) dℓ_j^k(x)/dx dx = ∫_{−1}^{1} ℓ_i(r) dℓ_j(r)/dr dr = S_ij.

Note in particular that no metric constant is introduced by the transformation. To realize a simple way to compute this, we define the differentiation matrix, D_r, with the entries

D_{r,(i,j)} = (dℓ_j/dr)|_{r_i}.

The motivation for this is found in Eq. (3.3), where we observe that D_r is the operator that transforms point values, u(r_i), to derivatives at these same points (e.g., u_h′ = D_r u_h). Consider now the product M D_r, with entries

(M D_r)_{(i,j)} = Σ_{n=1}^{Np} M_{in} D_{r,(n,j)} = Σ_{n=1}^{Np} ∫_{−1}^{1} ℓ_i(r) ℓ_n(r) (dℓ_j/dr)|_{r_n} dr
               = ∫_{−1}^{1} ℓ_i(r) Σ_{n=1}^{Np} (dℓ_j/dr)|_{r_n} ℓ_n(r) dr = ∫_{−1}^{1} ℓ_i(r) dℓ_j(r)/dr dr = S_ij.

In other words, we have the identity

M D_r = S.

The entries of the differentiation matrix, D_r, can be found directly,

V^T ℓ(r) = P̃(r) ⟹ V^T (d/dr) ℓ(r) = (d/dr) P̃(r),

leading to

V^T D_r^T = (V_r)^T,  V_{r,(i,j)} = (dP̃_j/dr)|_{r_i}.

To compute the entries of V_r we use the identity

dP̃_n/dr = √(n(n + 1)) P̃^{(1,1)}_{n−1}(r),

where P̃^{(1,1)}_{n−1} is the Jacobi polynomial (see Appendix A). This is implemented in GradJacobiP.m to initialize the gradient of V evaluated at the grid points, r, as in GradVandermonde1D.m.
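These identities are easy to confirm numerically. The Python sketch below (our stand-ins for Vandermonde1D/GradVandermonde1D; the basis derivative is taken directly rather than via the P^(1,1) identity) builds D_r = V_r V⁻¹, M = (VVᵀ)⁻¹, and S = M D_r, and checks that D_r differentiates polynomials exactly and that S + Sᵀ reduces to the boundary terms from integration by parts.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes(N):
    interior = leg.Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

def vandermonde_1d(N, r):
    return leg.legvander(r, N) * np.sqrt((2 * np.arange(N + 1) + 1) / 2.0)

def grad_vandermonde_1d(N, r):
    # Vr_ij = d Ptilde_{j-1} / dr evaluated at r_i
    Vr = np.zeros((len(r), N + 1))
    for j in range(N + 1):
        Vr[:, j] = np.sqrt((2 * j + 1) / 2.0) * leg.Legendre.basis(j).deriv()(r)
    return Vr

N = 8
r = lgl_nodes(N)
V = vandermonde_1d(N, r)
Dr = grad_vandermonde_1d(N, r) @ np.linalg.inv(V)  # from V^T Dr^T = Vr^T
M = np.linalg.inv(V @ V.T)
S = M @ Dr                                          # the identity M Dr = S

print(np.max(np.abs(Dr @ r**3 - 3 * r**2)))         # exact for degree <= N
B = S + S.T                                          # S_ij + S_ji = [ell_i ell_j]_{-1}^{1}
print(B[0, 0], B[-1, -1])                            # only the boundary entries survive
```

With LGL nodes the Lagrange polynomials satisfy ℓ_i(±1) = δ, so S + Sᵀ carries −1 and +1 in its corner entries and is zero elsewhere, which is precisely the discrete integration-by-parts structure used in the stability argument.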
Consider now the local stiffness matrix
The local approximation
Define first the nodal differentiation matrix
The local approximation
Define first the nodal differentiation matrix
52 3 Making it work in one dimension
where the latter follows from orthonormality of Pn(r). Thus, we recover
Mk =hk
2M =
hk
2!VVT
"!1.
Let us now consider Sk, given as
Skij =
# xkr
xkl
!ki (x)
d!kj (x)dx
dx =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
Note in particular that no metric constant is introduced by the transformation.To realize a simple way to compute this, we define the di!erentiation matrix,Dr, with the entries
Dr,(i,j) =d!j
dr
$$$$ri
.
The motivation for this is found in Eq. (3.3), where we observe that Dr isthe operator that transforms point values, u(ri), to derivatives at these samepoints (e.g., u"
h = Druh).Consider now the product MDr, with entries
(MDr)(i,j) =Np%
n=1
MinDr,(n,j) =Np%
n=1
# 1
!1!i(r)!n(r)
d!j
dr
$$$$rn
dr
=# 1
!1!i(r)
Np%
n=1
d!j
dr
$$$$rn
!n(r) dr =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
In other words, we have the identity
MDr = S.
The entries of the di!erentiation matrix, Dr, can be found directly,
VT !(r) = P (r) ! VT d
dr!(r) =
d
drP (r),
leading to
VTDTr = (Vr)T , Vr,(i,j) =
dPj
dr
$$$$$ri
.
To compute the entries of Vr we use the identity
dPn
dr=
&n(n + 1)P (1,1)
n!1 (r),
where P (1,1)n!1 is the Jacobi polynomial (see Appendix A). This is implemented
in GradJacobiP.m to initialize the gradient of V evaluated at the grid points,r, as in GradVandermonde1D.m.
Consider
52 3 Making it work in one dimension
where the latter follows from orthonormality of Pn(r). Thus, we recover
Mk =hk
2M =
hk
2!VVT
"!1.
Let us now consider Sk, given as
Skij =
# xkr
xkl
!ki (x)
d!kj (x)dx
dx =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
Note in particular that no metric constant is introduced by the transformation.To realize a simple way to compute this, we define the di!erentiation matrix,Dr, with the entries
Dr,(i,j) =d!j
dr
$$$$ri
.
The motivation for this is found in Eq. (3.3), where we observe that Dr isthe operator that transforms point values, u(ri), to derivatives at these samepoints (e.g., u"
h = Druh).Consider now the product MDr, with entries
(MDr)(i,j) =Np%
n=1
MinDr,(n,j) =Np%
n=1
# 1
!1!i(r)!n(r)
d!j
dr
$$$$rn
dr
=# 1
!1!i(r)
Np%
n=1
d!j
dr
$$$$rn
!n(r) dr =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
In other words, we have the identity
MDr = S.
The entries of the di!erentiation matrix, Dr, can be found directly,
VT !(r) = P (r) ! VT d
dr!(r) =
d
drP (r),
leading to
VTDTr = (Vr)T , Vr,(i,j) =
dPj
dr
$$$$$ri
.
To compute the entries of Vr we use the identity
dPn
dr=
&n(n + 1)P (1,1)
n!1 (r),
where P (1,1)n!1 is the Jacobi polynomial (see Appendix A). This is implemented
in GradJacobiP.m to initialize the gradient of V evaluated at the grid points,r, as in GradVandermonde1D.m.
52 3 Making it work in one dimension
where the latter follows from orthonormality of Pn(r). Thus, we recover
Mk =hk
2M =
hk
2!VVT
"!1.
Let us now consider Sk, given as
Skij =
# xkr
xkl
!ki (x)
d!kj (x)dx
dx =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
Note in particular that no metric constant is introduced by the transformation.To realize a simple way to compute this, we define the di!erentiation matrix,Dr, with the entries
Dr,(i,j) =d!j
dr
$$$$ri
.
The motivation for this is found in Eq. (3.3), where we observe that Dr isthe operator that transforms point values, u(ri), to derivatives at these samepoints (e.g., u"
h = Druh).Consider now the product MDr, with entries
(MDr)(i,j) =Np%
n=1
MinDr,(n,j) =Np%
n=1
# 1
!1!i(r)!n(r)
d!j
dr
$$$$rn
dr
=# 1
!1!i(r)
Np%
n=1
d!j
dr
$$$$rn
!n(r) dr =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
In other words, we have the identity
MDr = S.
The entries of the di!erentiation matrix, Dr, can be found directly,
VT !(r) = P (r) ! VT d
dr!(r) =
d
drP (r),
leading to
VTDTr = (Vr)T , Vr,(i,j) =
dPj
dr
$$$$$ri
.
To compute the entries of Vr we use the identity
dPn
dr=
&n(n + 1)P (1,1)
n!1 (r),
where P (1,1)n!1 is the Jacobi polynomial (see Appendix A). This is implemented
in GradJacobiP.m to initialize the gradient of V evaluated at the grid points,r, as in GradVandermonde1D.m.
52 3 Making it work in one dimension
where the latter follows from orthonormality of Pn(r). Thus, we recover
Mk =hk
2M =
hk
2!VVT
"!1.
Let us now consider Sk, given as
Skij =
# xkr
xkl
!ki (x)
d!kj (x)dx
dx =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
Note in particular that no metric constant is introduced by the transformation.To realize a simple way to compute this, we define the di!erentiation matrix,Dr, with the entries
Dr,(i,j) =d!j
dr
$$$$ri
.
The motivation for this is found in Eq. (3.3), where we observe that Dr isthe operator that transforms point values, u(ri), to derivatives at these samepoints (e.g., u"
h = Druh).Consider now the product MDr, with entries
(MDr)(i,j) =Np%
n=1
MinDr,(n,j) =Np%
n=1
# 1
!1!i(r)!n(r)
d!j
dr
$$$$rn
dr
=# 1
!1!i(r)
Np%
n=1
d!j
dr
$$$$rn
!n(r) dr =# 1
!1!i(r)
d!j(r)dr
dr = Sij .
In other words, we have the identity
MDr = S.
The entries of the di!erentiation matrix, Dr, can be found directly,
VT !(r) = P (r) ! VT d
dr!(r) =
d
drP (r),
leading to
VTDTr = (Vr)T , Vr,(i,j) =
dPj
dr
$$$$$ri
.
To compute the entries of Vr we use the identity
dPn
dr=
&n(n + 1)P (1,1)
n!1 (r),
where P (1,1)n!1 is the Jacobi polynomial (see Appendix A). This is implemented
in GradJacobiP.m to initialize the gradient of V evaluated at the grid points,r, as in GradVandermonde1D.m.
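The construction above, $\mathcal{D}_r = \mathcal{V}_r \mathcal{V}^{-1}$ (from $\mathcal{V}^T\mathcal{D}_r^T = \mathcal{V}_r^T$), can be sketched in Python/NumPy. This is our own minimal version, not the book's Matlab scripts (Vandermonde1D.m, GradVandermonde1D.m, Dmatrix1D.m); for simplicity it builds the normalized Legendre basis $\tilde P_n = \sqrt{n + 1/2}\,P_n$ directly, and the scaling of the basis columns cancels in $\mathcal{D}_r$ anyway.

```python
import numpy as np
from numpy.polynomial import legendre as L

def jacobi_gl(n):
    # Legendre-Gauss-Lobatto nodes: the endpoints plus the roots of P_n'
    c = np.zeros(n + 1); c[n] = 1.0
    interior = L.legroots(L.legder(c))
    return np.concatenate(([-1.0], interior, [1.0]))

def vandermonde(r, n):
    # V[i, j] = sqrt(j + 1/2) * P_j(r_i): normalized Legendre basis at the nodes
    V = np.zeros((len(r), n + 1))
    for j in range(n + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        V[:, j] = np.sqrt(j + 0.5) * L.legval(r, c)
    return V

def dmatrix(r, n):
    # Dr = Vr V^{-1}, where Vr[i, j] holds the basis derivatives at the nodes
    V = vandermonde(r, n)
    Vr = np.zeros_like(V)
    for j in range(n + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        Vr[:, j] = np.sqrt(j + 0.5) * L.legval(r, L.legder(c))
    return Vr @ np.linalg.inv(V)
```

For N = 4 this reproduces the matrix shown later in Fig. 3.4, including the corner entry -5.00, and one can check directly that it differentiates polynomials up to degree N exactly at the nodes.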
Monday, June 7, 2010
The local approximation
A list of essential functions to do local operations are
Element-wise operations
A summary of useful scripts for the 1D element operations in Matlab:

JacobiP            Evaluate normalized Jacobi polynomial
GradJacobiP        Evaluate derivative of Jacobi polynomial
JacobiGQ           Compute Gauss points and weights for use in quadrature
JacobiGL           Compute the Legendre-Gauss-Lobatto (LGL) nodes
Vandermonde1D      Compute V
GradVandermonde1D  Compute Vr
Dmatrix1D          Compute Dr
These are entirely problem independent
The remaining local operator is responsible for extracting the surface terms of the form

$$\oint_{\partial \mathsf{I}} \hat{n}\cdot(u_h - u^*)\,\ell_i(r)\, dr = (u_h - u^*)\big|_{r_{N_p}} e_{N_p} - (u_h - u^*)\big|_{r_1} e_1,$$

where $e_i$ is an $N_p$ long zero vector with a value of 1 in position $i$. Thus, it will be convenient to define an operator that mimics this, as initialized in Lift1D.m, which returns the matrix $\mathcal{M}^{-1}\mathcal{E}$, where $\mathcal{E}$ is an $N_p \times 2$ array. Note that we have again multiplied by $\mathcal{M}^{-1}$ for reasons that will become clear shortly.

Lift1D.m

function [LIFT] = Lift1D

% function [LIFT] = Lift1D
% Purpose : Compute surface integral term in DG formulation

Globals1D;
Emat = zeros(Np, Nfaces*Nfp);

% Define Emat
Emat(1,1) = 1.0; Emat(Np,2) = 1.0;

% inv(mass matrix)*\s_n (L_i,L_j)_edge_n
LIFT = V*(V'*Emat);
return
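The action of Lift1D.m can be mirrored in Python/NumPy. This is a sketch under our own naming: it rebuilds a nodal Vandermonde matrix from a normalized Legendre basis, and uses that $\mathcal{M}^{-1} = \mathcal{V}\mathcal{V}^T$, so the returned operator is $\mathcal{M}^{-1}\mathcal{E} = \mathcal{V}(\mathcal{V}^T\mathcal{E})$, exactly the structure of the Matlab line above.

```python
import numpy as np
from numpy.polynomial import legendre as L

def vandermonde(r, n):
    # V[i, j] = sqrt(j + 1/2) * P_j(r_i): normalized Legendre basis at the nodes
    V = np.zeros((len(r), n + 1))
    for j in range(n + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        V[:, j] = np.sqrt(j + 0.5) * L.legval(r, c)
    return V

def lift_1d(V):
    # E picks out the two face (end-point) nodes; LIFT = M^{-1} E = V (V^T E)
    Np = V.shape[0]
    E = np.zeros((Np, 2))
    E[0, 0] = 1.0
    E[-1, 1] = 1.0
    return V @ (V.T @ E)
```

A useful self-check is that applying the mass matrix $\mathcal{M} = (\mathcal{V}\mathcal{V}^T)^{-1}$ to LIFT recovers $\mathcal{E}$.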
3.3 Getting the grid together and computing the metric
With the local operators in place, we now discuss how to compute the metric associated with the grid and how to assemble the global grid. We begin by assuming that we are provided with two sets of data:

• A row vector, VX, of Nv (= K + 1) vertex coordinates. These represent the end points of the intervals or elements. They do not need to be ordered in any particular way, although they must be distinct. These vertices are numbered consecutively from 1 to Nv.

• An integer matrix, EToV, of size K × 2, containing in each row k the numbers of the two vertices forming element k. It is the responsibility of the user/grid generator to ensure that the elements are meaningful; that is, the grid should consist of one connected set of nonoverlapping elements. It is furthermore assumed that the nodes are ordered so elements are defined counterclockwise (in one dimension, this reflects a left-to-right ordering of the elements).

These two pieces of information are provided by any grid generator or, alternatively, a user should provide them to specify the grid for a problem (see Appendix B for a further discussion of this).
We will also need a boundary operator; we return to this below.
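The VX/EToV pair above is easy to generate for a uniform grid. A minimal Python sketch (our naming, mirroring the output of the book's MeshGen1D.m for an equidistant mesh):

```python
def mesh_gen_1d(xmin, xmax, K):
    """Equidistant 1D mesh: K elements, Nv = K + 1 vertices.

    Returns the vertex coordinates VX and the 1-based
    element-to-vertex table EToV, row k = [left, right] vertex.
    """
    VX = [xmin + (xmax - xmin) * i / K for i in range(K + 1)]
    EToV = [[k + 1, k + 2] for k in range(K)]
    return VX, EToV
```

For K = 2 on [0, 1] this reproduces the small mesh tables used later in the deck: VX = [0 0.5 1.0] and EToV = [1 2; 2 3].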
The local approximation - example
Consider
N = 1:
[ -0.50   0.50 ]
[ -0.50   0.50 ]

N = 2:
[ -1.50   2.00  -0.50 ]
[ -0.50   0.00   0.50 ]
[  0.50  -2.00   1.50 ]

N = 4:
[ -5.00   6.76  -2.67   1.41  -0.50 ]
[ -1.24   0.00   1.75  -0.76   0.26 ]
[  0.38  -1.34   0.00   1.34  -0.38 ]
[ -0.26   0.76  -1.75   0.00   1.24 ]
[  0.50  -1.41   2.67  -6.76   5.00 ]

N = 8:
[ -18.00   24.35   -9.75    5.54   -3.66    2.59   -1.87    1.28   -0.50 ]
[  -4.09    0.00    5.79   -2.70    1.67   -1.15    0.82   -0.56    0.22 ]
[   0.99   -3.49    0.00    3.58   -1.72    1.08   -0.74    0.49   -0.19 ]
[  -0.44    1.29   -2.83    0.00    2.85   -1.38    0.86   -0.55    0.21 ]
[   0.27   -0.74    1.27   -2.66    0.00    2.66   -1.27    0.74   -0.27 ]
[  -0.21    0.55   -0.86    1.38   -2.85    0.00    2.83   -1.29    0.44 ]
[   0.19   -0.49    0.74   -1.08    1.72   -3.58    0.00    3.49   -0.99 ]
[  -0.22    0.56   -0.82    1.15   -1.67    2.70   -5.79    0.00    4.09 ]
[   0.50   -1.28    1.87   -2.59    3.66   -5.54    9.75  -24.35   18.00 ]

Fig. 3.4. Examples of differentiation matrices, Dr, for orders N = 1, 2, 4 in the top row and N = 8 in the bottom row.
Examples of differentiation matrices for different orders of approximation are shown in Fig. 3.4. A few observations are worth making regarding these operators. Inspection of Fig. 3.4 shows that

$$\mathcal{D}_{r,(i,j)} = -\mathcal{D}_{r,(N-i,N-j)},$$

a property known as skew-antisymmetry. This is a general property as long as $r_i = -r_{N-i}$ (i.e., the interpolation points are symmetric around r = 0), and it can be exploited to compute fast matrix-vector products [291]. Furthermore, we notice that each row-sum is exactly zero, reflecting that the derivative of a constant is zero. This also implies that $\mathcal{D}_r$ has at least one zero eigenvalue. A more careful analysis will in fact show that $\mathcal{D}_r$ is nilpotent; that is, it has N + 1 zero eigenvalues and only one eigenvector. Thus, $\mathcal{D}_r$ is not diagonalizable but is similar to a single Jordan block of size N + 1. These and other details are discussed in depth in [159].
Example 3.2. To illustrate the accuracy of derivatives computed using $\mathcal{D}_r$, let us consider a few examples.

We first consider the following analytic function

$$u(x) = \exp(\sin(\pi x)), \quad x \in [-1, 1],$$

and compute the errors of the computed derivatives as a function of the order of the approximation, N. In Fig. 3.5 we show the L2-error as a function of N and observe an exponentially fast decay of the error, known as spectral convergence [159]. This is a clear indication of the strength of very high-order methods for the approximation of smooth functions.
[Figure 3.5: global L2-error ||u_x - D u_h|| vs. N. Left panel: N = 0 to 64, error decaying from 10^0 to 10^-14. Right panel: N = 10 to 10^3 for u^(1), u^(2), u^(3), with reference slopes N^{-1/2}, N^{-3/2}, N^{-5/2}.]

Fig. 3.5. On the left, we show the global L2-error for the derivative of the smooth function in Example 3.2, highlighting exponential convergence. On the right, we show the global L2-error for the derivative of the functions in Example 3.2, illustrating algebraic convergence closely connected to the regularity of the function.

To offer a more complete picture of the accuracy of the discrete derivative, let us also consider the sequence of functions

$$u^{(0)}(x) = \begin{cases} -\cos(\pi x), & -1 \le x < 0, \\ \phantom{-}\cos(\pi x), & 0 \le x \le 1, \end{cases} \qquad \frac{du^{(i+1)}}{dx} = u^{(i)}, \quad i = 0, 1, 2, 3, \ldots.$$

Note that $u^{(i+1)} \in C^i$; that is, $u^{(1)}$ is continuous but its derivative is discontinuous. In Fig. 3.5 we show the L2-error for the discrete derivative of $u^{(i)}$ for $i = 1, 2, 3$. We now observe a more moderate convergence rate,

$$\left\| \frac{du^{(i)}}{dx} - \mathcal{D}_r u^{(i)}_h \right\| \simeq N^{1/2 - i}.$$

If we recall that the expansion coefficients behave as

$$\left(\hat{u}^{(i)}_x\right)_n = \hat{u}^{(i-1)}_n \simeq \frac{1}{n^i},$$

then a rough estimate follows directly, since

$$\left\| \frac{du^{(i)}}{dx} - \mathcal{D}_r u^{(i)}_h \right\|^2 \simeq \sum_{n=N+1}^{\infty} \frac{1}{n^{2i}} \simeq \frac{1}{N^{2i-1}},$$

confirming the close connection between accuracy and smoothness indicated in Fig. 3.5. We shall return to a more detailed discussion of these aspects in Chapter 4, where we shall also realize that things are a bit more complex than the above result indicates. This completes the discussion of the local operators; the remaining operator, responsible for extracting the surface terms, is the one implemented in Lift1D.m above.
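The spectral-convergence behavior of Example 3.2 can be reproduced with a short, self-contained Python/NumPy sketch (our own helper names; it rebuilds $\mathcal{D}_r$ from a normalized Legendre basis rather than calling the book's Matlab scripts) by measuring the nodal error of the discrete derivative of $u(x) = \exp(\sin(\pi x))$ for increasing N:

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes(n):
    # Legendre-Gauss-Lobatto nodes: +/-1 plus the roots of P_n'
    c = np.zeros(n + 1); c[n] = 1.0
    return np.concatenate(([-1.0], L.legroots(L.legder(c)), [1.0]))

def diff_matrix(r):
    # Dr = Vr V^{-1} with a normalized Legendre basis
    n = len(r) - 1
    V = np.zeros((n + 1, n + 1)); Vr = np.zeros_like(V)
    for j in range(n + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        V[:, j] = np.sqrt(j + 0.5) * L.legval(r, c)
        Vr[:, j] = np.sqrt(j + 0.5) * L.legval(r, L.legder(c))
    return Vr @ np.linalg.inv(V)

def deriv_error(n):
    # max-norm error of the discrete derivative of u = exp(sin(pi x))
    r = lgl_nodes(n)
    D = diff_matrix(r)
    u = np.exp(np.sin(np.pi * r))
    ux = np.pi * np.cos(np.pi * r) * u        # exact derivative
    return np.max(np.abs(D @ u - ux))

errors = [deriv_error(n) for n in (4, 8, 16, 32)]
```

The error drops by many orders of magnitude from N = 8 to N = 32, consistent with the exponential decay seen in the left panel of Fig. 3.5.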
Spectral convergence
Accuracy ~ regularity
Getting the grid and metric together
From a grid generator we will get
‣ A numbered set of vertices - VX
‣ A set of elements, each of two vertices - EToV(K,2)

From this we need

‣ Grid
‣ Element sizes
‣ Element normals and Jacobians
‣ Element connectivity (element-to-element, edge-to-edge)
Getting the grid and metric together
We first compute the grid
If we specify the order, N, of the local approximations, we can compute the corresponding LGL nodes, r, in the reference element, I, and map them to each of the physical elements using the affine mapping, Eq. (3.1), as

>> va = EToV(:,1)'; vb = EToV(:,2)';
>> x = ones(Np,1)*VX(va) + 0.5*(r+1)*(VX(vb)-VX(va));

The array, x, is Np × K and contains the physical coordinates of the grid points. The metric of the mapping is

$$r_x = \frac{1}{x_r} = \frac{1}{J} = \frac{1}{\mathcal{D}_r x},$$

as implemented in GeometricFactors1D.m. Here, J represents the local transformation Jacobian.

GeometricFactors1D.m

function [rx,J] = GeometricFactors1D(x,Dr)

% function [rx,J] = GeometricFactors1D(x,Dr)
% Purpose : Compute the metric elements for the local mappings
%           of the 1D elements

xr = Dr*x; J = xr; rx = 1./J;
return

To readily access the elements along the faces (i.e., the two end points of each element), we form a small array with these two indices:

>> fmask1 = find( abs(r+1) < NODETOL)';
>> fmask2 = find( abs(r-1) < NODETOL)';
>> Fmask  = [fmask1;fmask2]';
>> Fx = x(Fmask(:), :);

where the latter, Fx, is a 2 × K array containing the physical coordinates of the edge nodes.

We now build the final piece of local information: the local outward-pointing normals and the inverse Jacobian at the interfaces, as done in Normals1D.m.

Normals1D.m

function [nx] = Normals1D

% function [nx] = Normals1D
% Purpose : Compute outward pointing normals at elements faces

Globals1D;
nx = zeros(Nfp*Nfaces, K);
and the local metric
The local normals are simple: nl = −1, nr = 1
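The affine map and metric can be sketched in Python/NumPy (our own naming; a sketch of what the Matlab lines above compute). For the 1D affine mapping, $x_r = \mathcal{D}_r x$ is the constant $h^k/2$ on each element, so $J = h^k/2$ and $r_x = 2/h^k$:

```python
import numpy as np

def affine_map(VX, EToV, r):
    # x[:, k]: physical coordinates of the reference nodes r mapped into element k
    va = [e[0] - 1 for e in EToV]          # EToV is 1-based, as in the book
    vb = [e[1] - 1 for e in EToV]
    VX = np.asarray(VX); r = np.asarray(r)
    xa, xb = VX[va], VX[vb]
    return np.outer(np.ones_like(r), xa) + 0.5 * np.outer(r + 1, xb - xa)

def geometric_factors(x, Dr):
    # xr = Dr x per element; J = xr; rx = 1/J
    xr = Dr @ x
    return 1.0 / xr, xr                    # rx, J

# Two elements on [0, 1] with linear (N = 1) nodes; Dr taken from Fig. 3.4
r = np.array([-1.0, 1.0])
Dr = np.array([[-0.5, 0.5], [-0.5, 0.5]])
x = affine_map([0.0, 0.5, 1.0], [[1, 2], [2, 3]], r)
rx, J = geometric_factors(x, Dr)
```

For this mesh, both elements have h = 0.5, so J = 0.25 and rx = 4 at every node.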
Putting the pieces together in a code

Mesh tables:

EToV = [1 2; 2 3];
VX = [0 0.5 1.0];

Global node numbering defined:

vmapM = [1 4 5 8];
vmapP = [1 5 4 8];

Face node numbering defined:

mapM = [1 2 3 4];
mapP = [1 3 2 4];
mapI = [1];
mapO = [4];

Outward pointing face normals:

nx = [-1 1; -1 1];
Getting the grid and metric together
We shall understand how to get the connectivity through an example: K = 4 elements, 5 vertices.
% Define outward normals
nx(1, :) = -1.0; nx(2, :) = 1.0;
return

The inverse Jacobian at the face nodes is obtained as

>> Fscale = 1./(J(Fmask,:));

So far, all the elements are disconnected and we need to obtain information about which vertex on each element connects to which vertex on neighboring elements to form the global grid. If a vertex is not connected, it will be assumed to be a boundary with the possible need to impose a boundary condition.

We form an integer matrix, FToV, which is (2K) × Nv. The 2K reflects the total number of faces, each of which has been assigned a global number. This supplies a map between this global face numbering, running from 1 to 2K, and the global vertex numbering, running from 1 to Nv.

To recover information about which global face connects to which global face, we compute the companion matrix

$$\text{FToF} = (\text{FToV})(\text{FToV})^T,$$

which is a (2K) × (2K) integer matrix with ones on the diagonal, reflecting that each face connects to itself. Thus, a pure connection matrix is found by removing the self-reference:

$$\text{FToF} = (\text{FToV})(\text{FToV})^T - \mathcal{I}.$$

In each row, corresponding to a particular face, one now finds either a value of 1 in one column, corresponding to the connecting face, or only zeros, reflecting that the particular face is an unconnected face (i.e., a boundary point).

Let us consider a simple example to make this approach clearer.

Example 3.3. Assume that we have read in the two arrays

VX = [0.0, 0.5, 1.5, 3.0, 2.5],   EToV = [1 2; 2 3; 3 5; 5 4],

representing a grid with K = 4 elements and Nv = 5 vertices. We form

FToV =
[ 1 0 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 0 1 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]
First form the face-to-vertex matrix, FToV
Getting the grid and metric together

This connects the 2K local faces to the Nv globally numbered vertices. The connectivity can then be recovered from the operator
(FToV)(FToV)^T =
[ 1 0 0 0 0 0 0 0 ]
[ 0 1 1 0 0 0 0 0 ]
[ 0 1 1 0 0 0 0 0 ]
[ 0 0 0 1 1 0 0 0 ]
[ 0 0 0 1 1 0 0 0 ]
[ 0 0 0 0 0 1 0 1 ]
[ 0 0 0 0 0 0 1 0 ]
[ 0 0 0 0 0 1 0 1 ]
The identity diagonal reflects the self-reference. Furthermore, we see that beyond the diagonal, rows 1 and 7 indicate that these faces have no connection and are assumed to reflect the outer boundaries of the computational domain. Finally, we see that face 2 connects to face 3 and face 3 to face 2, in agreement with the basic list of input data.

From the FToF-matrix we can immediately recover the element-to-element, EToE, and element-to-face, EToF, information, as each segment of two rows or columns corresponds to one element and its two faces. The whole procedure is implemented in Connect1D.m.
Connect1D.m
function [EToE, EToF] = Connect1D(EToV)

% function [EToE, EToF] = Connect1D(EToV)
% Purpose : Build global connectivity arrays for 1D grid based
%           on standard EToV input array from grid generator

Nfaces = 2;

% Find number of elements and vertices
K = size(EToV,1); TotalFaces = Nfaces*K; Nv = K+1;

% List of local face to local vertex connections
vn = [1,2];

% Build global face to node sparse array
SpFToV = spalloc(TotalFaces, Nv, 2*TotalFaces);
sk = 1;
for k=1:K
  for face=1:Nfaces
    SpFToV( sk, EToV(k, vn(face))) = 1;
    sk = sk+1;
  end
end
Consider
From this all connectivity information can be found
Edge 1 only self-connects -- it is a boundary!
Edge 2 connects to Edge 3
Edge 3 connects to Edge 2
etc.
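The FToV/FToF construction in Connect1D.m can be sketched in Python/NumPy (our own naming; a dense matrix replaces Matlab's spalloc, which is fine for small K). With Example 3.3's EToV, it recovers that faces 2 and 3 connect while face 1 is a boundary face:

```python
import numpy as np

def connect_1d(EToV):
    # EToV: K x 2 table of 1-based vertex numbers, one row per element
    EToV = np.asarray(EToV)
    K = EToV.shape[0]
    Nfaces = 2
    Nv = EToV.max()
    # Global face-to-vertex table: the faces of element k are numbered 2k-1, 2k
    FToV = np.zeros((Nfaces * K, Nv), dtype=int)
    sk = 0
    for k in range(K):
        for face in range(Nfaces):
            FToV[sk, EToV[k, face] - 1] = 1
            sk += 1
    # Companion matrix; subtracting the identity removes the self-reference
    FToF = FToV @ FToV.T - np.eye(Nfaces * K, dtype=int)
    return FToV, FToF

FToV, FToF = connect_1d([[1, 2], [2, 3], [3, 5], [5, 4]])
```

Note that the faces of element 4 are enumerated here in the order of its EToV row [5, 4], so with this ordering the unconnected (boundary) faces are the ones mapping to vertices 1 and 4.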
Getting the grid and metric together
This is used to build index maps

‣ vmapM -- such that u(vmapM) = interior u
‣ vmapP -- such that u(vmapP) = exterior u
‣ vmapB -- such that u(vmapB) = boundary u
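A minimal Python sketch of how such maps can be built from EToE/EToF in 1D (our own naming with 0-based internals; the book's BuildMaps1D.m handles the general case, including mapI/mapO). For the two-element mesh shown in the deck, it reproduces vmapM = [1 4 5 8] and vmapP = [1 5 4 8]:

```python
import numpy as np

def build_maps_1d(K, Np, EToE, EToF):
    # Global node numbers: column k holds the Np nodes of element k
    # (1-based values, matching the book, so element 1 owns nodes 1..Np)
    nodeids = np.arange(1, Np * K + 1).reshape(K, Np).T
    fmask = [0, Np - 1]                    # local indices of the two face nodes
    vmapM = np.zeros((2, K), dtype=int)    # interior ("minus") trace
    vmapP = np.zeros((2, K), dtype=int)    # exterior ("plus") trace
    for k in range(K):
        for f in range(2):
            vmapM[f, k] = nodeids[fmask[f], k]
            k2, f2 = EToE[k][f], EToF[k][f]   # neighbor element/face (0-based)
            vmapP[f, k] = nodeids[fmask[f2], k2]
    return vmapM.T.ravel(), vmapP.T.ravel()
```

Boundary faces are encoded, as in the book, by letting a face connect to itself in EToE/EToF, so that vmapM and vmapP coincide there.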
The global scheme
To connect all elements and compute the metric we need routines like
Element-wise operations
A summary of scripts needed for preprocessing for 1D DG-FEM computations in Matlab:

Globals1D            Define list of global variables
MeshGen1D            Generate a simple equidistant grid with K elements
Startup1D            Main script for pre-processing
BuildMaps1D          Automatically create index maps from conn. and bc tables
Normals1D            Compute outward pointing normals at elements faces
Connect1D            Build global connectivity arrays for 1D grid
GeometricFactors1D   Compute metrics of local mappings
Lift1D               Compute surface integral term in DG formulation

• Components listed here are general and reusable for each solver - this enables fast prototyping.
• MeshGen1D can be substituted with your own preferred mesh interface/generator.
These are also problem-independent!
Putting it together - an example
Consider again the wave equation
Let us now return to the simple linear wave equation
"u
"t+ 2#
"u
"x= 0, x ! [0, 2#],
u(x, 0) = sin(x),u(0, t) = " sin(2#t).
Following the recipe we have developed, we get
It is easy to see that the exact solution is
u(x, t) = sin(x − 2πt).
Following the developments in Chapter 2, we assume the local solution is well approximated as

u_h^k(x, t) = Σ_{i=1}^{Np} u_h^k(x_i^k, t) ℓ_i^k(x),
and the combination of these local solutions as the approximation to the global solution. This yields the local semidiscrete scheme

M^k du_h^k/dt + 2π S u_h^k = [ ℓ^k(x) (2π u_h^k − (2πu)*) ]_{x_l^k}^{x_r^k},

or

du_h^k/dt + 2π (M^k)⁻¹ S u_h^k = (M^k)⁻¹ [ ℓ^k(x) (2π u_h^k − (2πu)*) ]_{x_l^k}^{x_r^k}.
The analysis in Example 2.3 shows that choosing the flux as

(2πu)* = {{2πu}} + 2π (1 − α)/2 [[u]],   0 ≤ α ≤ 1,

results in a stable scheme. An implementation of this is shown in AdvecRHS1D.m. The parameter α can be used to adjust the flux; for example, α = 1 is a central flux and α = 0 reflects an upwind flux.
AdvecRHS1D.m
function [rhsu] = AdvecRHS1D(u, time, a)

% function [rhsu] = AdvecRHS1D(u, time, a)
% Purpose : Evaluate RHS flux in 1D advection

Globals1D;

% form field differences at faces
alpha = 1;
du = zeros(Nfp*Nfaces,K);
du(:) = (u(vmapM)-u(vmapP)).*(a*nx(:)-(1-alpha)*abs(a*nx(:)))/2;

% impose boundary condition at x=0
uin = -sin(a*time);
du(mapI) = (u(vmapI)-uin).*(a*nx(mapI)-(1-alpha)*abs(a*nx(mapI)))/2;
du(mapO) = 0;

% compute right hand sides of the semi-discrete PDE
rhsu = -a*rx.*(Dr*u) + LIFT*(Fscale.*(du));
return
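The role of α in the face-difference term of AdvecRHS1D.m can be checked directly; a small NumPy sketch (trace values illustrative):

```python
import numpy as np

def face_term(um, up, a, n, alpha):
    """The du expression from AdvecRHS1D.m at one face node:
    (u- - u+) * (a*n - (1-alpha)*|a*n|) / 2."""
    return (um - up)*(a*n - (1 - alpha)*abs(a*n))/2

a, um, up = 2*np.pi, 1.0, 0.4

# alpha = 1: central flux; the jump is weighted by a*n/2 on either side
assert np.isclose(face_term(um, up, a, +1, 1.0),  a*(um - up)/2)
assert np.isclose(face_term(um, up, a, -1, 1.0), -a*(um - up)/2)

# alpha = 0: pure upwind; the outflow side (a*n > 0) contributes nothing,
# while the inflow side (a*n < 0) carries the full jump
assert np.isclose(face_term(um, up, a, +1, 0.0), 0.0)
assert np.isclose(face_term(um, up, a, -1, 0.0), -a*(um - up))
```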
and the stable flux
Putting it together
We always formulate it through three parts. To build your own solver using the DGFEM codes:

AdvecDriver1D         Matlab main function (driver) for solving the 1D advection equation
Advec1D(u,FinalTime)  Matlab function for time integration of the semidiscrete PDE
AdvecRHS1D(u,time,a)  Matlab function defining right hand side of semidiscrete PDE
DGFEM Toolbox\1D
AdvecDriver1D Advec1D AdvecRHS1D
‣ Several examples, functions, and utilities in the DGFEM package
‣ Fast prototyping
‣ Write your own solver independent of the mesh
Temporal integration

As we have already discussed, we use RK methods.
rk4a = [ 0.0 ...
         -567301805773.0/1357537059087.0 ...
         -2404267990393.0/2016746695238.0 ...
         -3550918686646.0/2091501179385.0 ...
         -1275806237668.0/842570457699.0];
rk4b = [ 1432997174477.0/9575080441755.0 ...
         5161836677717.0/13612068292357.0 ...
         1720146321549.0/2090206949498.0 ...
         3134564353537.0/4481467310338.0 ...
         2277821191437.0/14882151754819.0];
rk4c = [ 0.0 ...
         1432997174477.0/9575080441755.0 ...
         2526269341429.0/6820363962896.0 ...
         2006345519317.0/3224310063776.0 ...
         2802321613138.0/2924317926251.0];
3.4 Dealing with time
The emphasis so far has been on the spatial dimension and a discrete representation of this. This reflects a method-of-lines approach, where we discretize space and time separately and use some standard technique to solve the ordinary differential equations for the latter.

We follow this approach here and, furthermore, focus on the use of explicit Runge-Kutta (RK) methods for integration in the temporal dimension. To discretize the semidiscrete problem
du_h/dt = L_h(u_h, t),

where u_h is the vector of unknowns, we can use the standard fourth-order, four-stage explicit RK method (ERK)

k(1) = L_h(u_h^n, t^n),
k(2) = L_h(u_h^n + Δt k(1)/2, t^n + Δt/2),
k(3) = L_h(u_h^n + Δt k(2)/2, t^n + Δt/2),
k(4) = L_h(u_h^n + Δt k(3), t^n + Δt),
u_h^{n+1} = u_h^n + Δt/6 (k(1) + 2 k(2) + 2 k(3) + k(4)),   (3.4)

to advance from u_h^n to u_h^{n+1}, separated by the timestep, Δt.

As simple and widely used as this classic ERK approach is, it has the disadvantage that it requires four extra storage arrays, k(i). An attractive alternative to this is a low-storage version [46] of the fourth-order method (LSERK) of the form

p(0) = u_h^n,
i ∈ [1, ..., 5] :  k(i) = a_i k(i−1) + Δt L_h(p(i−1), t^n + c_i Δt),
                   p(i) = p(i−1) + b_i k(i),
u_h^{n+1} = p(5).   (3.5)

The coefficients needed in the LSERK are given in Table 3.2.

Table 3.2. Coefficients for the low-storage five-stage fourth-order ERK method (LSERK).

i   a_i                            b_i                             c_i
1   0                              1432997174477/9575080441755     0
2   −567301805773/1357537059087    5161836677717/13612068292357    1432997174477/9575080441755
3   −2404267990393/2016746695238   1720146321549/2090206949498     2526269341429/6820363962896
4   −3550918686646/2091501179385   3134564353537/4481467310338     2006345519317/3224310063776
5   −1275806237668/842570457699    2277821191437/14882151754819    2802321613138/2924317926251

The main difference here is that only one additional storage level is required, thus reducing the memory usage significantly. On the other hand, this comes at the price of an additional function evaluation, as the low-storage version has five stages.

At first, it would seem that the additional stage makes the low-storage approach less interesting due to the added cost. However, as we will discuss in more detail in Chapter 4, the added cost in the low-storage RK is offset by allowing a larger stable timestep, Δt.

It should be emphasized that these methods are not exclusive and many alternatives exist [40, 143, 144]. In particular, for strongly nonlinear problems, the added nonlinear stability of strong stability-preserving methods may be advantageous, as we will discuss further in Chapter 5.

3.5 Putting it all together

Let us now return to the simple linear wave equation

∂u/∂t + 2π ∂u/∂x = 0,   x ∈ [0, 2π],

u(x, 0) = sin(x),   u(0, t) = −sin(2πt).
There are many suitable alternatives
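As a concrete illustration, the LSERK update (3.5) with the Table 3.2 coefficients can be sketched as follows (tested here on du/dt = −u; the helper name lserk_step is ours):

```python
import numpy as np

# LSERK coefficients from Table 3.2
rk4a = np.array([0.0,
                 -567301805773.0/1357537059087.0,
                 -2404267990393.0/2016746695238.0,
                 -3550918686646.0/2091501179385.0,
                 -1275806237668.0/842570457699.0])
rk4b = np.array([1432997174477.0/9575080441755.0,
                 5161836677717.0/13612068292357.0,
                 1720146321549.0/2090206949498.0,
                 3134564353537.0/4481467310338.0,
                 2277821191437.0/14882151754819.0])
rk4c = np.array([0.0,
                 1432997174477.0/9575080441755.0,
                 2526269341429.0/6820363962896.0,
                 2006345519317.0/3224310063776.0,
                 2802321613138.0/2924317926251.0])

def lserk_step(u, t, dt, rhs):
    """One step of Eq. (3.5): only u and one residual array are stored."""
    k = np.zeros_like(u)
    for i in range(5):
        k = rk4a[i]*k + dt*rhs(u, t + rk4c[i]*dt)
        u = u + rk4b[i]*k
    return u
```

Integrating du/dt = −u from u(0) = 1 to t = 1 with this routine reproduces exp(−1) to fourth-order accuracy.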
Putting it together - an example
To complete the scheme for solving the advection equation, we need only integrate the system in time. As discussed in Section 3.4, we do this using a low-storage RK method. This is implemented in Advec1D.m.
Advec1D.m
function [u] = Advec1D(u, FinalTime)

% function [u] = Advec1D(u, FinalTime)
% Purpose : Integrate 1D advection until FinalTime starting with
%           the initial condition, u

Globals1D;
time = 0;

% Runge-Kutta residual storage
resu = zeros(Np,K);

% compute time step size
xmin = min(abs(x(1,:)-x(2,:)));
CFL = 0.75; dt = CFL/(2*pi)*xmin; dt = .5*dt;
Nsteps = ceil(FinalTime/dt); dt = FinalTime/Nsteps;

% advection speed
a = 2*pi;

% outer time step loop
for tstep=1:Nsteps
   for INTRK = 1:5
      timelocal = time + rk4c(INTRK)*dt;
      [rhsu] = AdvecRHS1D(u, timelocal, a);
      resu = rk4a(INTRK)*resu + dt*rhsu;
      u = u + rk4b(INTRK)*resu;
   end
   % Increment time
   time = time + dt;
end
return
One issue we have not discussed yet is how to choose the timestep to ensure a discretely stable scheme. In Advec1D.m we use the guideline

Δt ≤ (C/2π) min_{k,i} Δx_i^k,

where Δx_i^k = x_{i+1}^k − x_i^k and 2π is the maximum wave speed. The constant, C, is a Courant-Friedrichs-Lewy (CFL)-like constant that can be expected to be O(1). We will discuss this in more detail in Chapter 4, but the above expression is an excellent guideline.
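In code, the guideline is one line; a NumPy sketch (the function name is ours; x holds the nodes with one column per element, and the spacing between the first two nodes of each element serves as a proxy for the minimal nodal distance, as in Advec1D.m):

```python
import numpy as np

def stable_dt(x, a_max, CFL=0.75):
    """Timestep guideline dt <= (C / a_max) * min dx, with dx taken
    as the smallest spacing between the first two nodes per element."""
    xmin = np.min(np.abs(x[0, :] - x[1, :]))
    return CFL / a_max * xmin
```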
Putting it together - an example
Finally, we need a driver routine where the grid information is read in, the global grid and the metric are assembled as discussed in Section 3.3, and the initial conditions are set. An example of this is shown in AdvecDriver1D.m. All results presented in Chapter 2 have been obtained using this simple code.
AdvecDriver1D.m
% Driver script for solving the 1D advection equation
Globals1D;

% Order of polynomials used for approximation
N = 8;

% Generate simple mesh
[Nv, VX, K, EToV] = MeshGen1D(0.0, 2.0, 10);

% Initialize solver and construct grid and metric
StartUp1D;

% Set initial conditions
u = sin(x);

% Solve Problem
FinalTime = 10;
[u] = Advec1D(u, FinalTime);
It is worth emphasizing that the above three routines, AdvecRHS1D.m, Advec1D.m, and AdvecDriver1D.m, are all that is needed to solve the problem at any order and using any grid. In fact, only AdvecRHS1D.m requires substantial changes to solve another problem, as we will see next, where we discuss the routines for a more complex problem. The grid information is generated in MeshGen1D.m, which is described in Appendix B.
3.6 Maxwell’s equations
As a slightly more complicated problem, let us consider the one-dimensional Maxwell's equations

ε(x) ∂E/∂t = −∂H/∂x,   µ(x) ∂H/∂t = −∂E/∂x,   (3.6)

where (E, H) represent the electric and magnetic fields, respectively, and the material parameters, ε(x) and µ(x), reflect the electric permittivity and magnetic permeability, respectively.

We solve these equations in a fixed domain x ∈ [−2, 2] with both ε(x) and µ(x) changing discontinuously at x = 0. For simplicity, we assume that the
Structure is the same for any time-dependent problem.
Often only the first routine needs any work.
Putting it together - an example
2.2 Basic elements of the schemes
Fig. 2.2. On the left we compare the solution to the wave equation, computed using an N = 1, K = 12 discontinuous Galerkin method with a central flux. On the right, the same computation employs an upwind flux. In both cases, the dashed line represents the exact solution.

and K = 12 elements, we show the solution at T = 2π, computed using a central flux. We note in particular the discontinuous nature of the solution, which is a characteristic of the family of methods discussed here. To contrast this, we show on the right of Fig. 2.2 the same solution computed using a pure upwind flux. This leads to a solution with smaller jumps between the elements. However, the dissipative nature of the upwind flux is also apparent. An important lesson to learn from this is that a visually smoother solution is not necessarily a more accurate solution, although in the case considered here, the global errors are comparable.
Many of the observations made in the above example regarding high-order methods can be put on firmer ground through an analysis of the phase errors. Although we will return to some of this again in Chapter 4, other insightful details can be found in [155, 159, 208, 307].
The example illustrates that to solve a given problem to a specific accuracy, one is most likely best off by having the ability to choose the element size, h, as well as the order of the scheme, N, independently, and preferably in a local manner. The ability to do this is one of the main advantages of the family of schemes discussed in this text.
2.2.2 An alternative viewpoint
Before we continue, it is instructive to consider an alternative derivation. The main dilemma posed by the choice of a piecewise polynomial representation of u_h, with no a priori assumptions about continuity, is how to evaluate a gradient. Starting with a solution defined on each of the elements of the grid, u_h^k, we can imagine continuing the function from the boundaries and beyond the element. In the one-dimensional case, this is achieved by adding two scaled Heaviside functions, defined as
Putting it together - an example
2 The key ideas
Example 2.4. Consider Eq. (2.1) as
∂u/∂t − 2π ∂u/∂x = 0,   x ∈ [0, 2π],

with periodic boundary conditions and initial condition as

u(x, 0) = sin(lx),   l = 2π/λ,
where λ is the wavelength. We use the strong form, Eq. (2.8), although for this simple example, the weak form yields identical results. The nodes are chosen as the Legendre-Gauss-Lobatto nodes, as we shall discuss in detail in Chapter 3. An upwind flux is used and a fourth-order explicit Runge-Kutta method is employed to integrate the equations in time, with the timestep chosen small enough to ensure that timestep errors can be neglected (see Chapter 3 for details on the implementation).
In Table 2.1 we list a number of results, showing the global L2-error at final time T = π as a function of the number of elements, K, and the order of the local approximation, N. Inspecting these results, we observe several things. First, the scheme is clearly convergent, and there are two roads to a converged result: one can increase the local order of approximation, N, and/or one can increase the number of elements, K.
Table 2.1. Global L2-errors when solving the wave equation using K elements, each with a local order of approximation, N. Note that for N = 8, the finite precision dominates and destroys the expected convergence rate.
N\K    2        4        8        16       32       64       Convergence rate
1      –        4.0E-01  9.1E-02  2.3E-02  5.7E-03  1.4E-03  2.0
2      2.0E-01  4.3E-02  6.3E-03  8.0E-04  1.0E-04  1.3E-05  3.0
4      3.3E-03  3.1E-04  9.9E-06  3.2E-07  1.0E-08  3.3E-10  5.0
8      2.1E-07  2.5E-09  4.8E-12  2.2E-13  5.0E-13  6.6E-13  9.0
The rate by which the results converge is not, however, the same when changing N and K. If we define h = 2π/K as a measure of the size of the local element, we observe that
‖u − u_h‖_{Ω,h} ≤ C h^{N+1}.
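The observed order can be extracted from Table 2.1 by comparing errors under grid doubling; a short NumPy computation for the N = 2 row:

```python
import numpy as np

# N = 2 row of Table 2.1; each step doubles K, i.e., halves h
errs = np.array([2.0e-01, 4.3e-02, 6.3e-03, 8.0e-04, 1.0e-04, 1.3e-05])
rates = np.log2(errs[:-1] / errs[1:])   # local convergence orders
# the rates approach N + 1 = 3 as the grid is refined
print(rates)
```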
Thus, it is the order of the local approximation that gives the fast convergence rate. The constant, C, does not depend on h, but it may depend on the final time, T, of the solution. To highlight this, we consider in Table 2.2 the same problem but solved at different final times, T.
This indicates a linear scaling in time as

‖u − u_h‖_{Ω,h} ≤ C(T) h^{N+1} ≃ (c₁ + c₂ T) h^{N+1}.
Clearly, c₁ and c₂ are problem-dependent constants. The high accuracy reflected in Table 2.1, however, comes at a price. In Table 2.3 we show the approximate execution times, scaled to one for (N, K) = (1, 2), for all the examples in Table 2.1. The execution time and, thus, the computational effort scales approximately like
A slightly more complicated example
Consider linear system
∂/∂t [u; v] + [0 a; b 0] ∂/∂x [u; v] = 0
How do we choose the flux in this case?

Clearly, ab > 0 is required for hyperbolicity.
Recall that

A = [0 a; b 0],

and, with S⁻¹ A S = Λ,

S = [ 1/√b  −1/√b ; 1/√a  1/√a ],   S⁻¹ = ½ [ √b  √a ; −√b  √a ],   Λ = [ √(ab)  0 ; 0  −√(ab) ].
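The decomposition is easy to verify numerically; a NumPy check (a = 4, b = 1 are illustrative values with ab > 0):

```python
import numpy as np

a, b = 4.0, 1.0
A = np.array([[0.0, a], [b, 0.0]])
S = np.array([[1/np.sqrt(b), -1/np.sqrt(b)],
              [1/np.sqrt(a),  1/np.sqrt(a)]])
Sinv = 0.5*np.array([[ np.sqrt(b), np.sqrt(a)],
                     [-np.sqrt(b), np.sqrt(a)]])
Lam = np.diag([np.sqrt(a*b), -np.sqrt(a*b)])

# S^{-1} is indeed the inverse of S, and S^{-1} A S = Λ
assert np.allclose(Sinv @ S, np.eye(2))
assert np.allclose(Sinv @ A @ S, Lam)
```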
A slightly more complicated example
This allows us to transform the equation as
S⁻¹ [u; v]_t + Λ S⁻¹ [u; v]_x = 0.

So now we know that

½ (√b u + √a v)    propagates right,
½ (−√b u + √a v)   propagates left.

With Λ = Λ⁺ + Λ⁻ split into its right- and left-going parts, this suggests a flux like

S [ Λ⁺ (√b u⁻ + √a v⁻)/2 + Λ⁻ (−√b u⁺ + √a v⁺)/2 ].
A slightly more complicated example
Recall the system

∂/∂t [u; v] + [0 a; b 0] ∂/∂x [u; v] = 0.

Straightforward manipulations yield the flux components

(√a/2) ( √a (v⁺ + v⁻) + √b (u⁻ − u⁺) ),   (√b/2) ( √b (u⁺ + u⁻) + √a (v⁻ − v⁺) ),

or the equivalent

(av)* = {{av}} + (√(ab)/2) n · [[u]],   (bu)* = {{bu}} + (√(ab)/2) n · [[v]],

which we recognize as the Lax-Friedrichs (LF) flux.
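This equivalence can also be checked numerically: applying the characteristic splitting S(Λ⁺ S⁻¹q⁻ + Λ⁻ S⁻¹q⁺) reproduces the LF form {{f}} + (√(ab)/2)[[q]]. A NumPy sketch (a = 4, b = 1 and the trace values are illustrative):

```python
import numpy as np

a, b = 4.0, 1.0
A = np.array([[0.0, a], [b, 0.0]])
S = np.array([[1/np.sqrt(b), -1/np.sqrt(b)],
              [1/np.sqrt(a),  1/np.sqrt(a)]])
Sinv = 0.5*np.array([[ np.sqrt(b), np.sqrt(a)],
                     [-np.sqrt(b), np.sqrt(a)]])
Lp = np.diag([np.sqrt(a*b), 0.0])    # right-going part of Λ
Lm = np.diag([0.0, -np.sqrt(a*b)])   # left-going part of Λ

qm = np.array([0.3, 1.1])    # interior trace (u-, v-)
qp = np.array([-0.7, 0.2])   # exterior trace (u+, v+)

# characteristic (upwind) flux: right-going info from the left state,
# left-going info from the right state
f_char = S @ (Lp @ (Sinv @ qm) + Lm @ (Sinv @ qp))

# Lax-Friedrichs form: {{f}} + (sqrt(ab)/2)[[q]]
f_lf = A @ (qm + qp)/2 + np.sqrt(a*b)/2*(qm - qp)

assert np.allclose(f_char, f_lf)
```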
Shallow water equations
Consider 1D shallow water equations in periodic domain
∂/∂t [η; u] = [0 −h; −g 0] ∂/∂x [η; u]
Tests of h- and p-refinement
[Convergence plots: error versus number of degrees of freedom. Left: h-version for P = 2, 4, 6 with observed slopes 3, 5, 7. Right: h-version (P = 1) versus p-version (K = 20).]
Again, the error behaves as
‖u − u_h‖_{Ω,h} ≤ C h^{N+1}
Examples: error behavior
Consider the simple advection equation on a periodic domain
∂_t u − 2π ∂_x u = 0,   x ∈ [0, 2π],   u(x, 0) = sin(lx),   l = 2π/λ.

The exact solution is then u(x, t) = sin(l(x + 2πt)).

Errors at final time T = π:
N\K    2        4        8        16       32       64       Convergence rate
1      –        4.0E-01  9.1E-02  2.3E-02  5.7E-03  1.4E-03  2.0
2      2.0E-01  4.3E-02  6.3E-03  8.0E-04  1.0E-04  1.3E-05  3.0
4      3.3E-03  3.1E-04  9.9E-06  3.2E-07  1.0E-08  3.3E-10  5.0
8      2.1E-07  2.5E-09  4.8E-12  2.2E-13  5.0E-13  6.6E-13  (9.0)
The error is seen to behave as

‖u − u_h‖_{Ω,h} ≤ C h^{N+1}.

Clearly, paths to convergence are based on adjusting the size of the elements (h-convergence), the polynomial order (p-convergence), or combinations hereof.
Putting it to use - an example

Consider Maxwell's equations
[Domain sketch: x ∈ [−1, 1], split at x = 0, with materials (ε₁, µ₁) on the left and (ε₂, µ₂) on the right.]
BC: E(-1,t)=E(1,t)=0 -- a PEC cavity
Interface conditions: Continuity of fields
The problem has a nice exact solution
Putting it to use - an example
Scheme becomes
materials are otherwise piecewise constant. Relaxing this would only require minor modifications. Furthermore, we assume that E(−2, t) = E(2, t) = 0, corresponding to a physical situation where the computational domain is enclosed in a metallic cavity.

To develop the scheme, we take the usual path and seek an approximation (E, H) ≈ (E_h, H_h), composed as the direct sum of K local polynomials (E_h^k, H_h^k) of the form

[E_h^k(x, t); H_h^k(x, t)] = Σ_{i=1}^{Np} [E_h^k(x_i^k, t); H_h^k(x_i^k, t)] ℓ_i^k(x).

Introducing this into Eq. (3.6) and requiring Maxwell's equations to be satisfied locally in the strong discontinuous Galerkin form yields the semidiscrete scheme

dE_h^k/dt + (1/(J^k ε^k)) D_r H_h^k = (1/(J^k ε^k)) M⁻¹ [ ℓ^k(x) (H_h^k − H*) ]_{x_l^k}^{x_r^k},

dH_h^k/dt + (1/(J^k µ^k)) D_r E_h^k = (1/(J^k µ^k)) M⁻¹ [ ℓ^k(x) (E_h^k − E*) ]_{x_l^k}^{x_r^k}.

As the fluxes, we will use those derived in Example 2.6, of the form

H* = (1/{{Z}}) ( {{Z H}} + ½ [[E]] ),   E* = (1/{{Y}}) ( {{Y E}} + ½ [[H]] ),

where

Z± = √(µ±/ε±) = (Y±)⁻¹.

This yields the terms

H⁻ − H* = (1/(2{{Z}})) ( Z⁺ [[H]] − [[E]] ),
E⁻ − E* = (1/(2{{Y}})) ( Y⁺ [[E]] − [[H]] ),

as the penalty terms entering on the right-hand side of the semidiscrete scheme. This is all implemented in MaxwellRHS1D.m.
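The penalty identity can be confirmed numerically from the flux definition above; a NumPy check (impedances and trace values are illustrative, with the 1D jump taken as the interior minus the exterior trace):

```python
import numpy as np

# Verify H- − H* = (Z+ [[H]] − [[E]]) / (2{Z})
# for the upwind flux H* = ({Z H} + [[E]]/2) / {Z}.
Zm, Zp = 1.0, 3.0            # impedances on the two sides of a face
Hm, Hp = 0.8, -0.2           # magnetic field traces
Em, Ep = 0.5, 1.4            # electric field traces

Zavg = (Zm + Zp)/2           # {Z}
ZHavg = (Zm*Hm + Zp*Hp)/2    # {Z H}
jumpH, jumpE = Hm - Hp, Em - Ep   # [[H]], [[E]]

Hstar = (ZHavg + jumpE/2)/Zavg
assert np.isclose(Hm - Hstar, (Zp*jumpH - jumpE)/(2*Zavg))
```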
Putting it to use - an example
MaxwellRHS1D.m
function [rhsE, rhsH] = MaxwellRHS1D(E,H,eps,mu)

% function [rhsE, rhsH] = MaxwellRHS1D(E,H,eps,mu)
% Purpose : Evaluate RHS flux in 1D Maxwell

Globals1D;

% Compute impedance
Zimp = sqrt(mu./eps);

% Define field differences at faces
dE = zeros(Nfp*Nfaces,K); dE(:) = E(vmapM)-E(vmapP);
dH = zeros(Nfp*Nfaces,K); dH(:) = H(vmapM)-H(vmapP);
Zimpm = zeros(Nfp*Nfaces,K); Zimpm(:) = Zimp(vmapM);
Zimpp = zeros(Nfp*Nfaces,K); Zimpp(:) = Zimp(vmapP);
Yimpm = zeros(Nfp*Nfaces,K); Yimpm(:) = 1./Zimpm(:);
Yimpp = zeros(Nfp*Nfaces,K); Yimpp(:) = 1./Zimpp(:);

% Homogeneous boundary conditions, Ez=0
Ebc = -E(vmapB); dE(mapB) = E(vmapB) - Ebc;
Hbc = H(vmapB); dH(mapB) = H(vmapB) - Hbc;

% evaluate upwind fluxes
fluxE = 1./(Zimpm + Zimpp).*(nx.*Zimpp.*dH - dE);
fluxH = 1./(Yimpm + Yimpp).*(nx.*Yimpp.*dE - dH);

% compute right hand sides of the PDE's
rhsE = (-rx.*(Dr*H) + LIFT*(Fscale.*fluxE))./eps;
rhsH = (-rx.*(Dr*E) + LIFT*(Fscale.*fluxH))./mu;
return
The temporal integration is done using Maxwell1D.m, which is essentially unchanged from Advec1D.m in the previous section. The only change is that we are now integrating a system and, thus, need to integrate all components of the system at each stage.
Maxwell1D.m
function [E,H] = Maxwell1D(E,H,eps,mu,FinalTime);

% function [E,H] = Maxwell1D(E,H,eps,mu,FinalTime)
% Purpose : Integrate 1D Maxwell's until FinalTime starting with
%           conditions (E(t=0),H(t=0)) and materials (eps,mu).

Globals1D;
time = 0;

% Runge-Kutta residual storage
resE = zeros(Np,K); resH = zeros(Np,K);

% compute time step size
xmin = min(abs(x(1,:)-x(2,:)));
CFL = 1.0; dt = CFL*xmin;
Nsteps = ceil(FinalTime/dt); dt = FinalTime/Nsteps;

% outer time step loop
for tstep=1:Nsteps
   for INTRK = 1:5
      [rhsE, rhsH] = MaxwellRHS1D(E,H,eps,mu);
      resE = rk4a(INTRK)*resE + dt*rhsE;
      resH = rk4a(INTRK)*resH + dt*rhsH;
      E = E+rk4b(INTRK)*resE;
      H = H+rk4b(INTRK)*resH;
   end
   % Increment time
   time = time+dt;
end
return
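The stage loop above is the five-stage, fourth-order low-storage Runge-Kutta scheme driven by the coefficient arrays rk4a/rk4b set up in Globals1D. A self-contained Python sketch of the same update (the coefficients are the standard Carpenter-Kennedy values reproduced here rather than read from Globals1D, and the scalar test problem is mine, not the book's):

```python
import numpy as np

# Five-stage, fourth-order low-storage Runge-Kutta (LSERK) coefficients
# (Carpenter & Kennedy), as tabulated in the book's rk4a/rk4b arrays.
rk4a = np.array([0.0,
                 -567301805773.0/1357537059087.0,
                 -2404267990393.0/2016746695238.0,
                 -3550918686646.0/2091501179385.0,
                 -1275806237668.0/842570457699.0])
rk4b = np.array([1432997174477.0/9575080441755.0,
                 5161836677717.0/13612068292357.0,
                 1720146321549.0/2090206949498.0,
                 3134564353537.0/4481467310338.0,
                 2277821191437.0/14882151754819.0])

def lserk_step(u, rhs, dt):
    """One low-storage RK step: res = a*res + dt*rhs(u); u += b*res.
    Only one extra storage array (res) is needed, regardless of the
    number of stages. The rhs here is autonomous, so the rk4c stage
    times are not needed."""
    res = np.zeros_like(u)
    for a, b in zip(rk4a, rk4b):
        res = a * res + dt * rhs(u)
        u = u + b * res
    return u

# Sanity check on u' = -u, u(0) = 1, integrated to t = 1:
u, dt = np.array([1.0]), 0.1
for _ in range(10):
    u = lserk_step(u, lambda v: -v, dt)
print(float(u[0]))   # close to exp(-1) = 0.36788...
```

The low-storage form is what allows Maxwell1D.m to keep only one residual array per field (resE, resH) instead of the four stage vectors of classical RK4.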
In the final routine, MaxwellDriver1D.m, the only significant difference from AdvecDriver1D.m is the need to also specify the spatial distribution of ε and µ. In this particular case, we have assumed that these are constant in each element but can jump between elements. Furthermore, we assume here that the jump is between the two middle elements. Clearly, this specification is problem dependent and can be changed as needed. Also, if ε and/or µ vary smoothly within the elements, this can be specified here with no further changes elsewhere.
MaxwellDriver1D.m
% Driver script for solving the 1D Maxwell's equations
Globals1D;

% Polynomial order used for approximation
N = 6;

% Generate simple mesh
[Nv, VX, K, EToV] = MeshGen1D(-2.0,2.0,80);

% Initialize solver and construct grid and metric
StartUp1D;

% Set up material parameters
eps1 = [ones(1,K/2), 2*ones(1,K/2)];
mu1 = ones(1,K);
epsilon = ones(Np,1)*eps1; mu = ones(Np,1)*mu1;

% Set initial conditions
E = sin(pi*x).*(x<0); H = zeros(Np,K);

% Solve Problem
FinalTime = 10;
[E,H] = Maxwell1D(E,H,epsilon,mu,FinalTime);
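The driver's replication of the per-element material values to all Np nodes (epsilon = ones(Np,1)*eps1) is simply an outer product. The same setup in Python/NumPy (a sketch with hypothetical sizes, not the book's code):

```python
import numpy as np

Np, K = 7, 80                      # nodes per element (N=6), number of elements
# Per-element materials: vacuum in the left half, eps = 2 in the right half,
# mirroring eps1 = [ones(1,K/2), 2*ones(1,K/2)] in MaxwellDriver1D.m
eps1 = np.concatenate([np.ones(K // 2), 2.0 * np.ones(K // 2)])
mu1 = np.ones(K)

# Replicate each element's constant value to all Np nodes of that element;
# np.outer(ones(Np), eps1) is the analogue of ones(Np,1)*eps1
epsilon = np.outer(np.ones(Np), eps1)   # shape (Np, K), one column per element
mu = np.outer(np.ones(Np), mu1)

print(epsilon.shape, epsilon[0, 0], epsilon[0, -1])   # (7, 80) 1.0 2.0
```

Storing the materials nodewise like this is what makes smoothly varying ε and µ a purely local change to the driver.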
To illustrate the performance of the developed algorithm, we consider a problem consisting of a cavity, x ∈ [−1, 1], with metallic end walls [i.e., E(±1, t) = 0]. Furthermore, we assume that x = 0 represents an interface between the two halves of the cavity and that each half is a different homogeneous material. For simplicity we assume the materials are nonmagnetic (i.e., µ^(m) = 1, m = 1, 2).
This cavity supports a number of resonant modes of the form

$$E^{(m)}(x,t) = -\left( A^{(m)} \exp(i\omega n^{(m)} x) - B^{(m)} \exp(-i\omega n^{(m)} x) \right) \exp(i\omega t),$$
$$H^{(m)}(x,t) = \left( A^{(m)} \exp(i\omega n^{(m)} x) + B^{(m)} \exp(-i\omega n^{(m)} x) \right) \exp(i\omega t),$$

where

$$B^{(1)} = \exp(-i2n^{(1)}\omega) A^{(1)}, \qquad B^{(2)} = -\exp(i2n^{(2)}\omega) A^{(2)},$$

due to the boundary conditions,

$$A^{(1)} = \frac{n^{(2)} \cos(n^{(2)}\omega)}{n^{(1)} \cos(n^{(1)}\omega)}, \qquad A^{(2)} = \exp(i\omega(n^{(1)} + n^{(2)})),$$

as a convenient normalization, and ω is a solution to the equation

$$-n^{(2)} \tan(\omega n^{(1)}) = n^{(1)} \tan(\omega n^{(2)}),$$

found by requiring continuity of the field components across the material interface. Here, and elsewhere above, the index of refraction is defined as $n^{(m)} = \sqrt{\varepsilon^{(m)}}$.
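The resonance condition is transcendental, so ω must be found numerically. One way to do it, as a Python sketch (not part of the book's codes; the bracket below is chosen between two poles of the tangents and is specific to n^(1) = 1, n^(2) = 1.5):

```python
import math

def resonance_residual(w, n1, n2):
    """Residual of the resonance condition -n2*tan(w*n1) = n1*tan(w*n2)."""
    return n2 * math.tan(w * n1) + n1 * math.tan(w * n2)

def bisect(f, a, b, tol=1e-14):
    """Plain bisection; assumes f is continuous on [a, b] with a sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# For n1 = 1, n2 = 1.5 the residual is continuous and changes sign on
# (1.06, 1.55), an interval lying between the tangent poles at
# w = pi/(2*n2) and w = pi/(2*n1).
n1, n2 = 1.0, 1.5
w = bisect(lambda w: resonance_residual(w, n1, n2), 1.06, 1.55)
print(w)
```

Each interval between consecutive poles contains one root, so the full resonance spectrum can be swept by moving the bracket.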
For n^(1) = n^(2), this is a simple sinusoidal solution, whereas for n^(1) ≠ n^(2) the solution is more complex and loses smoothness across the material interface, where the solution is only continuous.
We first solve the above problem with n^(1) = 1.0 (i.e., vacuum) and n^(2) = 1.5, similar to glass, using different numbers of elements, K, and orders of approximation, N, in each element. All elements have the same size, h = 2/K, and in the first case we assume that K is even, which guarantees that an element interface is located at x = 0. On the left in Fig. 3.6 we show the results, illustrating optimal convergence, O(h^{N+1}), once the fields are reasonably resolved.
Putting it to use - an example
Both time-stepping and driver are essentially unchanged from the first case.
[Figure 3.6: two convergence plots of the global error ||E − E_h|| versus K for N = 1, 2, 3, 4, with reference slopes h^2 and h^5 (left panel) and h and h^2 (right panel).]

Fig. 3.6. We show the global L2-error as a function of the order of approximation, N, and the number of elements, K ∝ h^{-1}, for the computed E-field solution to Maxwell's equations in a metallic cavity with a material interface located at x = 0. On the left, the grid is conforming to the geometry; that is, there is an element boundary located at x = 0. On the right, this is violated and the material interface is located in the middle of an element. The loss of optimal convergence highlights the importance of using a body-conforming discretization.
Also shown in Fig. 3.6 are the results for the case where K is odd, implying that x = 0 falls in the middle of an element. This shows a dramatic reduction in the accuracy of the scheme, which is now second-order accurate. We also observe that, for N even, the order is further reduced to first order. This can be attributed to the fact that for N even, there is a grid point exactly at x = 0, whereas for N odd, x = 0 falls between two grid points. For N even, one can redefine the material constant at x = 0 to be the average in order to improve the convergence rate. A closer inspection of the result in Fig. 3.6 for N = 3 indicates less than second-order convergence, and a detailed analysis reveals that the asymptotic rate should be O(h^{3/2}) [234].
This latter example emphasizes the importance of using a geometry-conforming grid, in which the elements align with features where the solution loses smoothness, in order to maintain the global high-order accuracy. However, even if this is not possible, the results in Fig. 3.6 also show that although the rate of convergence is reduced, the absolute error is lower for high-order approximations; that is, the constant in front of the error term decreases with increasing order.
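The convergence rates quoted above can be measured directly from two runs: if ||E − E_h|| ≈ C h^p, then p ≈ log(e1/e2)/log(h1/h2). A small helper to do this (Python; the sample errors below are synthetic, not the actual data of Fig. 3.6):

```python
import math

def observed_order(h1, e1, h2, e2):
    """Estimate p in e ~ C*h^p from two (mesh size, error) pairs."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Synthetic errors behaving like h^4, i.e., the optimal rate h^(N+1) for N = 3:
h1, h2 = 2.0 / 20, 2.0 / 40          # h = 2/K for K = 20 and K = 40 elements
e1, e2 = 3.0 * h1**4, 3.0 * h2**4
print(observed_order(h1, e1, h2, e2))   # 4.0
```

Applied to the misaligned-grid data, the same computation would return rates near 2 (or 3/2 for N = 3), making the loss of optimal convergence easy to quantify.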
3.7 Exercises
1. Modify the solver for the scalar advection equation in Section 3.5 to solve

$$\frac{\partial u}{\partial t} + a \frac{\partial u}{\partial x} = 0, \quad x \in [0, 2\pi],$$
Left: aligned grid. Right: mis-aligned grid.
Let's summarize

We have some of the major pieces:

‣ Some understanding of the key elements
‣ Evidence of good behavior
‣ Computationally well-conditioned operators
‣ Flexible and general implementations

However, many problems remain open:

‣ What can we prove in terms of accuracy?
‣ How do we compute fluxes for systems?
‣ What about time-steps?