

SPARSE GRID METHODS FOR MULTI-DIMENSIONAL INTEGRATION

    SRI VENKATA TAPOVAN LOLLA.

Abstract. In this paper, we study various methods for multi-dimensional integration (cubature). Specifically, we focus on a technique called the sparse-grid method for cubature and illustrate its performance by applying it to several integration problems arising in physics. We also implement an adaptive sparse-grid scheme which overcomes the drawbacks of regular sparse grids to some extent. The performance of regular and adaptive sparse grids is compared to several competing Monte Carlo algorithms for cubature.

Key words. cubature, sparse grids, Smolyak construction, adaptive sparse grids, multi-dimensional integration

1. Introduction and motivation. Multi-dimensional integrals arise in many areas of interest, including science, engineering and even finance. Statistical mechanics, valuation of financial derivatives, discretization of partial differential equations, and numerical computation of path integrals are but a few examples where high-dimensional integration is essential [16]. In several stochastic numerical methods such as spectral methods, computation of the spectral coefficients involves the estimation of a high-dimensional integral. The dimension of the integrand in these problems can be as large as several hundred [37]. The exact computation of these high-dimensional integrals is usually out of the question, since analytical expressions for the integrals are rarely available in practice. Thus, efficient numerical schemes to approximate these integrals are often sought after. This task is complicated by the fact that these integrals are often required to a high level of accuracy, which can become computationally challenging, even for supercomputers [36]. The main reason for this difficulty is the curse of dimensionality: to achieve a prescribed level of accuracy in computing a multi-dimensional integral, the amount of work required (e.g. the number of quadrature points in a quadrature rule) typically grows exponentially with the dimension [25]. Thus, the rates of convergence for moderate-to-large dimensional systems are extremely poor, and this limits the total accuracy that can be obtained by conventional methods. Efficient methods for multi-dimensional integration avoid the curse of dimensionality to some extent by taking advantage of the structure of the function and the level of smoothness it exhibits. It is an implicit assumption that the function to be integrated is expensive to evaluate. Thus, an efficient integration method limits the number of function evaluations in the approximation to a minimum and focuses on careful selection of the node points at which the function is evaluated.

In this term paper, we explore various methods for multi-dimensional integration. Specifically, we aim to study strategies to evaluate the d-dimensional integral:

$$I_d f = \int_{\Omega} f(x)\, dx, \qquad \Omega = [-1, 1]^d.$$

We will focus on a certain class of numerical methods called sparse grid methods, which approximate the integral to any desired accuracy. We also describe and implement a dimension-adaptive sparse grid method and study the improvement it offers over regular sparse grid methods in terms of reduction in the number of quadrature points and integration error.



1.1. Layout. This paper is organized as follows: In section 2, we perform an exhaustive literature review of existing methods for multi-dimensional integration and describe their historical development, tracking related work up to the recent past. Original articles are cited wherever possible. The advantages, drawbacks and other characteristics of all the methods are described in detail. In section 3, we describe four rules for one-dimensional quadrature that we later extend to study multi-dimensional quadrature schemes. Error bounds and typical performance characteristics are highlighted. In section 4, we describe various methods to extend the results of 1-D quadrature rules to the multi-dimensional case. Here, we present the sparse grid method of Smolyak [12]. We also describe the dimension-adaptive sparse grid method originally proposed in [36]. We present the results of the above algorithms in evaluating a wide variety of integrals in section 5. We compare our results against those obtained by the open-source cubature package CUBA [44]. We summarize the paper and our findings in section 6.

2. Literature Review. Multi-dimensional integration (or cubature) is a well-researched topic. Literature in this area dates back at least to the time of Gauss. It is still a very active area of research, as no numerical scheme has yet been devised that is vastly superior to the presently available ones. Literature on numerical 1-D integration is even more widespread. Several textbooks on numerical integration, such as [24], [25], limit their focus to the 1-D case due to the large number of numerical schemes available. Early works of Newton and Cotes [24] in approximating 1-D integrals by a weighted summation of the function values at equally spaced points have led to schemes like Simpson's rule, the trapezoidal rule, Boole's rule, etc. These schemes rely on approximating the integral of a given function by the sum of areas of regular polygons obtained by partitioning the interval equally. Such a numerical scheme, which approximates the integral by a weighted sum of function values evaluated at specially chosen points, is called a quadrature. The weights in the Newton-Cotes formulas are computed by integration of a Lagrange interpolant through the corresponding quadrature points. Seminal works of Gauss leading to various Gauss quadrature rules are based on a similar idea. The key difference between Gauss quadrature and the Newton-Cotes method is that the quadrature points are not uniform, but correspond to roots of a family of orthogonal polynomials [24], [26]. We will discuss Gauss quadrature formulae in detail in section 3. A more recent quadrature scheme for 1-D integration is the Clenshaw-Curtis quadrature [2]. This is based on an approximate representation of the function in terms of Chebyshev polynomials, which are then integrated exactly.

In the case of multi-dimensional integrals, a quadrature approximation can be performed sequentially in each direction individually to obtain an approximate value for the full integral [23]. This amounts to a tensor product approximation, based on product rules, of the full integral. Such classical quadrature techniques soon run into trouble when the dimension d of the integral becomes large. For a given accuracy level $\epsilon$, the number of quadrature points N needed to obtain the required accuracy scales as [25]:

$$\epsilon(N) = O\left(N^{-r/d}\right),$$

for functions with bounded derivatives up to order r. This clearly suggests that for moderate dimensions, the order of convergence is extremely slow, and a high accuracy for the integral cannot be expected, unless the function is special.


Sparse-grid methods are largely based on the algorithm proposed by Smolyak [12] in 1963. They alleviate the curse of dimensionality to some extent for certain classes of functions that have bounded mixed derivatives [16]. In this method, the multivariate quadrature formulas are constructed as a combination of tensor products of univariate difference quadrature formulas. The basic idea behind the method follows from the observation that a given level of accuracy in evaluating a multi-dimensional integral can be obtained using far fewer points than the full tensor product approximation, i.e. the quadrature points are much sparser than in a full tensor product rule [17]. Of all the possible index combinations, the only acceptable indices are the ones that lie within a unit simplex (this will be described in detail in section 4). Hence the name, sparse-grid methods. In these methods, the smoothness of the integrand plays a crucial role in the computational complexity. For functions with mixed derivatives bounded up to order r, the number of points N required to obtain an accuracy of $\epsilon$ scales as [36]:

$$\epsilon(N) = O\left(N^{-r} (\log N)^{(d-1)(r+1)}\right).$$

This clearly indicates an improvement over the full tensor product rule. In fact, for infinitely smooth functions ($r \to \infty$), convergence can be up to exponential. Smolyak's sparse grid construction has been utilized in several recent works in the areas of wavelets [50], solution of partial differential equations [27, 51, 52] and data compression [53], to name a few. Further recent works may be found in [16]. Even though sparse grid methods offer a considerable advantage over the full tensor product approximation, their convergence rates deteriorate as d increases due to the dependence on the log N term.

A recent research thrust has been to develop sparse grid schemes which have better convergence rates, without compromising too much on accuracy. In certain integrands, some dimensions are more important than others. For such functions, fewer quadrature points can be used in the less important dimensions and more points can be concentrated in the more important ones. This leads to adaptive sparse grid methods. Regular sparse grid methods treat all dimensions with equal importance and hence have nothing to gain when dimensions are of different importance. When the relative importance of different dimensions is known a priori (e.g. see [34] for the case of an elliptic equation), different weights may be assigned to different dimensions, which leads to a dimension-adaptive sparse grid method, as described in [31, 23].

Several dimension-adaptive schemes (not necessarily sparse-grid) are available in the literature. One of the oldest methods for adaptive cubature was proposed by van Dooren and de Ridder [35]. In their scheme, the d-dimensional hypercube is divided into several smaller hypercubes; a low-order quadrature rule is used to approximately evaluate the integral over each sub-cube and to obtain an estimate of the integration error in that smaller cube. The cube with the largest integration error is recursively sub-divided into smaller d-cubes. This process stops when the integration error in the smaller sub-cubes is lower than the requested accuracy. Other older adaptive cubature routines are referenced in [35]. This algorithm was improved by Genz and Malik [33], who introduced an alternate, improved strategy for the adaptive subdivision. The regions where the integrand has large error are identified by computing the fourth divided differences of the integrand, so that any further bisections of the hyper-rectangle in the adaptive scheme can concentrate on this worst dimension. Other improvements to these adaptive subdivision strategies may be found in [39] and references therein. These adaptive sub-division strategies are found to perform best


for integrands of a moderate number of dimensions ($2 \le d \le 6$), but the computational overhead in identifying and refining the important dimensions is too large for higher dimensional problems ($d > 7$).

Other algorithms have been developed which try to quantify the most important dimensions of the integrand. In these algorithms, a high dimensional function is approximated by sums of lower dimensional functions. Such ideas are common in statistics for regression problems and density estimation. Examples of these are the additive models [29], multivariate adaptive regression splines (MARS) [28], and the analysis of variance (ANOVA) decomposition [30]. Examples and references to other additive models and dimension reduction techniques can be found in [29].

When the important dimensions are not known a priori, the weights to be assigned to the dimensions are not known. This is the main drawback of the dimension-adaptive sparse grid scheme proposed in [31]. To address this issue, several authors have proposed general adaptive sparse grids. Bonk [32] proposed an adaptive sparse grid scheme which allows adaptive refinement in both the sub-domains and in the order. A sparse-grid based extrapolation technique keeps the computational cost from blowing up. However, [32] only deals with the case of linear basis functions. This scheme was extended by Bungartz and Dirnstorfer [37] to general hierarchical basis functions. However, these two approaches do not work for large dimensional integrals, because they are designed only to tackle local non-smooth behavior of the function, which is detrimental to the performance of any quadrature method. Gerstner and Griebel [36] proposed a dimension-adaptive tensor-product quadrature on sparse grids. We implement this scheme in this paper, and a detailed description of this algorithm is part of section 4. The scheme works by increasing the level of univariate quadrature through a tradeoff between the increased computational effort due to the increased number of function evaluations and the incremental improvement obtained as a result of increasing accuracy in that dimension. More recently, Jakeman and Roberts [38] combined the adaptive schemes in [36] and [37] to obtain a robust and flexible sparse grid scheme that is capable of hierarchical basis adaptivity. The algorithm greedily selects the subspaces that contribute most to the variability of the integrand. The hierarchical surplus of points within each subspace is used as an error criterion for the h-refinement to concentrate the effort within rapidly varying or discontinuous regions. Thus, it combines the advantages of the schemes in [36] and [37]. Further references for h-adaptivity and p-adaptivity may be found in [38]. Thus, adaptive sparse grid methods are reported to perform much better than other recursive sub-division methods for high dimensional functions.

The other important class of numerical schemes at the forefront of high-dimensional integration is a class of randomized methods called Monte Carlo approaches [40]. The Monte Carlo method is probably the best known representative of this class of algorithms. The basic idea behind Monte Carlo methods is simple. The function is evaluated at a characteristic set of points that are uniformly distributed through the hypercube $\Omega = [-1, 1]^d$. These points constitute samples of the uniform distribution over $\Omega$. Given samples $x_i^d \in \mathbb{R}^d$, $i = 1, 2, \ldots, N$, where N is the total number of samples, the Monte Carlo approximation to the integral of f(x) over $\Omega$ is simply the average of the function values evaluated at the sample points:

$$I_d f \approx I_{MC} f = \frac{1}{N} \sum_{i=1}^{N} f(x_i^d).$$

The weak law of large numbers states that as $N \to \infty$, the Monte Carlo estimate


$I_{MC} f$ approaches $I_d f$ at a rate $O(N^{-0.5})$. In other words, in order to reach a given accuracy of $\epsilon$, the amount of work (the number of Monte Carlo samples, N) required scales as:

$$\epsilon(N) = O\left(N^{-0.5}\right).$$

The first obvious observation from the above asymptotic estimate is that the rate of convergence of Monte Carlo methods is independent of the dimension d of the integral. However, this convergence rate is slow for most purposes. Thus, for large d, a high accuracy is achieved only by using a large number of independent samples N. In fact, a good fraction of the time in Monte Carlo integration is spent on generation of the samples. This is because for large d, the number of samples needed to adequately represent the function becomes large.
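To make the estimator concrete, the following is a minimal sketch of plain Monte Carlo cubature over $\Omega = [-1, 1]^d$ (in Python; the integrand and sample count are placeholders, and the explicit $2^d$ volume factor, which a formula normalized to the unit cube absorbs, is written out):

import numpy as np

def monte_carlo_cubature(f, d, N, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [-1, 1]^d."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(N, d))    # N uniform samples in the cube
    vol = 2.0 ** d                             # volume of [-1, 1]^d
    return vol * np.mean([f(xi) for xi in x])  # volume times the sample mean

# Example: f(x) = exp(x1 + x2 + x3) over [-1, 1]^3; exact value is (e - 1/e)^3.
estimate = monte_carlo_cubature(lambda x: np.exp(x.sum()), d=3, N=100_000)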

This leads us to the so-called Quasi-Monte Carlo algorithms [45] and Latin Hypercube Sampling [42], which have received considerable attention over the past few years. Here, the integrand is evaluated not at random points, but at structurally determined points chosen so that their discrepancy is smaller than that of random points. These methods are improved sampling strategies which force the sampler to draw samples more uniformly distributed in $\Omega$. For Latin Hypercube Sampling, this is achieved by forcing the sampler to draw realizations within equiprobable bins in the parameter range. The sample dimension is fixed a priori to define the bins. In the Quasi-Monte Carlo method, a low-discrepancy deterministic sequence of points is generated so as to maximize the uniformity of the sample points. Minimum discrepancy is obtained for samples that lie on the vertices of a regular grid, whose size scales exponentially. To circumvent this, different sequences such as the one proposed by Sobol [46] are used. More information about these methods may be obtained from [45].

For quasi-Monte Carlo methods, the integration error scales as:

$$\epsilon(N) = O\left(N^{-1} (\log N)^d\right),$$

which is roughly half an order better than the regular Monte Carlo method [45]. It should be noted that these error bounds are asymptotic (and deterministic). Clearly, quasi-Monte Carlo methods have a convergence rate that depends on the dimension d. Thus, they run into similar issues as quadrature based methods when approximating high dimensional integrals. The effectiveness of Quasi-Monte Carlo methods in approximating high dimensional integrals was studied by Sloan and Wozniakowski [41], where more references may be obtained.
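As an illustration of the low-discrepancy approach, a quasi-Monte Carlo estimate can be sketched with the Sobol generator shipped in scipy.stats.qmc (an assumed dependency; the sample count and integrand are placeholders):

import numpy as np
from scipy.stats import qmc

def qmc_cubature(f, d, m):
    """Quasi-Monte Carlo estimate with 2^m Sobol points mapped to [-1, 1]^d."""
    u = qmc.Sobol(d=d, scramble=False).random_base2(m)  # low-discrepancy points in [0, 1]^d
    x = 2.0 * u - 1.0                                   # map to the cube [-1, 1]^d
    return 2.0 ** d * np.mean([f(xi) for xi in x])

estimate = qmc_cubature(lambda x: np.exp(x.sum()), d=3, m=14)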

An attractive feature of Monte Carlo methods is that their characteristics do not depend on the nature (smoothness) of the integrand. Unlike quadrature based methods, Monte Carlo approaches do not approximate the integrand by polynomials, and exhibit the same convergence characteristics for all functions. This can, however, also be perceived as a drawback, as Monte Carlo methods offer no immediate advantage when trying to evaluate the integral of a highly smooth function. Thus, for sufficiently smooth integrands, sparse grids, specifically adaptive sparse grids, are expected to outperform Monte Carlo methods. Monte Carlo methods can be modified if certain important dimensions of the integrand are known a priori. In this case, a technique called importance sampling may be used to choose the samples from a sampling distribution that is different from the uniform distribution [40]. The idea is that the sampling distribution is designed so as to concentrate more samples in the important dimensions of the integrand. This also leads to variance reduction techniques


to accelerate the convergence of the Monte Carlo estimate of the integral. Similarly, for Quasi-Monte Carlo methods, prior sorting of the dimensions according to their importance leads to better convergence rates, yielding a reduction of the effective dimension [36]. The reason for this is the better distributional behavior of low-discrepancy sequences in lower dimensions than in higher ones [43]. An open-source package for cubature, containing several Monte Carlo methods for integration, is freely available on the web [44].

We must also mention another set of schemes for cubature based on various lattice rules [47]. These lattice rules are similar to quadrature rules, the key difference being that lattice rules assign equal weights to each lattice point. We do not describe or study these methods in detail, but only refer the interested reader to [47], where more related works can be found. Another class of cubature schemes which we do not focus on are based on neural networks [48]. More information about these can be obtained in [48] and references therein.

3. One-dimensional Quadrature. In this section, we describe some quadrature rules for 1-D integration. The rules described here will form the basis of higher-dimensional tensor product quadrature, and also of sparse-grid cubature (both non-adaptive and adaptive). In particular, we describe four 1-D quadrature rules which will then be applied in the sparse-grid extension. In the following, we use the notation in [16] to represent various quantities. The exact integral of a function f in d dimensions is denoted by $I_d f$, while its quadrature approximation is denoted by $Q_l^d f$. Here, $l \in \mathbb{N}$ denotes the level of the quadrature, which governs the number of quadrature points used in the approximation, $n_l^d$. In the 1-D case, d = 1 and $\Omega = [-1, 1]$. The quadrature approximation to $I_1 f$ is given by:

$$I_1 f = \int_{-1}^{1} f(x)\, dx \approx Q_l^1 f := \sum_{i=1}^{n_l^1} w_{li} f(x_{li}), \tag{3.1}$$

where the $w_{li}$ are the weights and the $x_{li}$ are the quadrature points. Different quadrature rules have different criteria for choosing $w_{li}$ and $x_{li}$. The sum in (3.1) is an $n_l^1$-point quadrature rule for evaluating $I_1 f$ approximately. We shall simply call this a quadrature rule of level l from now on, as $n_l^1$ is uniquely determined by l for a given quadrature rule. Typically, $n_l^1 = O(2^l)$, i.e. the number of quadrature points roughly doubles with increasing l, while $n_1^1 = 1$ for every quadrature rule we shall see. Furthermore, we define the grid of the quadrature points $x_{li}$ as:

$$\Gamma_l^1 := \left\{ x_{li} : 1 \le i \le n_l^1 \right\}. \tag{3.2}$$

For d = 1 this set is simply a collection of scalar quantities $x_{li}$. For d > 1, the quadrature grid is a collection of d-dimensional vectors $x_{li}$:

$$\Gamma_l^d := \left\{ x_{li} : 1 \le i \le n_l^d \right\} \subset [-1, 1]^d. \tag{3.3}$$

The quadrature formulas are said to be nested if the corresponding grids are nested, i.e.,

$$\Gamma_l^d \subset \Gamma_{l+1}^d.$$

Finally, we introduce the notation for the error in a quadrature rule of level l for d-dimensional cubature:

$$E_l^d f = \left| I_d f - Q_l^d f \right|. \tag{3.4}$$


The error in the 1-D case follows from the above definition. We shall provide various error bounds for $E_l^d f$ in the following sections, assuming certain smoothness conditions on f. Let $C^r$ denote the set of all functions which have bounded mixed derivatives up to and including order r. Finally, we use the following rule as a convention, for all quadrature rules:

$$Q_1^1 f = 2 f(0), \qquad Q_1^d f = 2^d f(0),$$

i.e., the quadrature rule for level l = 1 will always be chosen so that the only quadrature point is the origin, and it has weight 2. We now describe various quadrature rules for the 1-D case.

3.1. Trapezoidal Rule. The trapezoidal rule is one of the quadrature formulas of Newton and Cotes [25]. In this rule, the interval ([-1, 1] in our case) is divided by equally spaced abscissas, which form the quadrature points. The integral $I_1 f$ is then approximated by the sum of the areas of the trapezoids formed by these quadrature points and the corresponding heights of the function at these points:

$$I_1 f \approx Q_l^1 f = \sum_{i=1}^{n_l^1 - 1} \frac{f(x_i) + f(x_{i+1})}{2} \cdot \frac{2}{n_l^1 - 1} \tag{3.5}$$

$$= \frac{f(x_1)}{n_l^1 - 1} + \sum_{i=2}^{n_l^1 - 1} \frac{2}{n_l^1 - 1}\, f(x_i) + \frac{f(x_{n_l^1})}{n_l^1 - 1}, \tag{3.6}$$

valid for $l \ge 2$. The number of quadrature points at level l is chosen to be

$$n_l^1 = 2^{l-1} + 1, \quad l \ge 2,$$

so that the origin is always included in the quadrature set. Clearly, for the trapezoidal rule, the quadrature points and weights are given by:

$$x_{li} = -1 + (i - 1)\frac{2}{n_l^1 - 1}, \quad 1 \le i \le n_l^1, \tag{3.7}$$

$$w_{li} = \begin{cases} \dfrac{1}{n_l^1 - 1}, & \text{for } i = 1,\, n_l^1, \\[1ex] \dfrac{2}{n_l^1 - 1}, & \text{for } 1 < i < n_l^1. \end{cases} \tag{3.8}$$

We allow the end points $\{-1, 1\}$ to be part of the quadrature set for $l \ge 2$. Clearly, the trapezoidal rule is a nested quadrature because $\Gamma_l^1 \subset \Gamma_{l+1}^1$. It is well known that the trapezoidal rule exhibits a convergence rate given by:

$$E_l^1 f = O\left(2^{-2l}\right).$$

For functions periodic in $[-1, 1]$ with $f \in C^r$, the convergence rate dramatically improves to $O(2^{-lr})$ [16]. Similar bounds exist for other Newton-Cotes formulas [24]. Newton-Cotes formulas do not converge for a general integrand f; they converge only if f is analytic in a region surrounding the interval of interest [3]. Evaluation of the trapezoidal weights and quadrature points requires no additional effort, because they are known exactly, independent of the integrand f.
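As a concrete sketch of (3.7)-(3.8) under the level convention above (level 1 being the single midpoint with weight 2):

import numpy as np

def trapezoid_rule(l):
    """Level-l trapezoidal points and weights on [-1, 1], following (3.7)-(3.8)."""
    if l == 1:
        return np.array([0.0]), np.array([2.0])  # level-1 convention: midpoint, weight 2
    n = 2 ** (l - 1) + 1                         # n_l^1 = 2^(l-1) + 1
    x = np.linspace(-1.0, 1.0, n)                # equally spaced abscissas (3.7)
    w = np.full(n, 2.0 / (n - 1))                # interior weights (3.8)
    w[0] = w[-1] = 1.0 / (n - 1)                 # halved endpoint weights (3.8)
    return x, w

# Quadrature sum (3.1): Q_l^1 f = sum_i w_i f(x_i), e.g. for f = exp on [-1, 1].
x, w = trapezoid_rule(5)
approx = np.dot(w, np.exp(x))                    # approximately e - 1/e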


3.2. Clenshaw-Curtis Rule. The Clenshaw-Curtis quadrature rule [2] makes use of the increased rate of convergence of the trapezoidal rule for periodic functions. For non-periodic functions f, a change of variables $x = \cos\theta$ makes the integrand $2\pi$-periodic. The term $f(\cos\theta)$ in the integrand is then approximated by its truncated Fourier cosine series. The resulting expansion is then integrated exactly:

$$I_1 f = \int_0^\pi f(\cos\theta)\sin\theta \, d\theta = \int_0^\pi \left[ \frac{a_0}{2} + \sum_{m=1}^{\infty} a_m \cos(m\theta) \right] \sin\theta \, d\theta = a_0 + \sum_{k=1}^{\infty} \frac{2 a_{2k}}{1 - 4k^2}.$$

The coefficients of the cosine series expansion are obtained by a trapezoidal rule:

$$a_m = \frac{2}{\pi} \int_0^\pi f(\cos\theta) \cos(m\theta) \, d\theta.$$

For details, see [2], [3] and also [26]. In practice, the Clenshaw-Curtis quadrature is evaluated by writing the integrand as a weighted sum of function values at the Chebyshev points, which form the quadrature points. As described by Gentleman [9, 10], the Clenshaw-Curtis quadrature weights can be computed by a Fast Fourier Transform. See also [8] for a nice discussion of the fast construction of Clenshaw-Curtis weights. In our implementation, we use the inverse Fourier transform method described in [8] to compute the Clenshaw-Curtis weights. The quadrature points are given by:

$$x_{li} = \cos\frac{(i - 1)\pi}{n_l^1 - 1}, \quad 1 \le i \le n_l^1, \tag{3.9}$$

for a quadrature rule with $n_l^1$ points. In our implementation, we use the practical Clenshaw-Curtis points [3], i.e. we use the end points $\{-1, 1\}$, and the origin is always a member of the quadrature set $\Gamma_l^1$. In this case, $n_l^1$ is given by:

$$n_l^1 = 2^{l-1} + 1, \quad l \ge 2,$$

which is the same as in the trapezoidal rule. Like the trapezoidal rule, Clenshaw-Curtis quadrature is nested. It is well known that an $n_l^1$-point Clenshaw-Curtis rule integrates polynomials up to degree $n_l^1 - 1$ exactly. Clenshaw-Curtis quadrature converges to the true integral for any continuous function f. The error for an integrand $f \in C^r$ scales as:

$$E_l^1 f = O\left(2^{-lr}\right).$$

In fact, Trefethen [3] argues that Clenshaw-Curtis quadrature has an error bounded by $O(2^{-lr}/r)$ for an r-times differentiable f.
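A minimal sketch of the rule follows (not the FFT construction of [8] that our implementation uses): the weights below are obtained by requiring exactness for the Chebyshev polynomials $T_0, \ldots, T_{n-1}$, an $O(n^3)$ but easily checked alternative:

import numpy as np

def clenshaw_curtis(l):
    """Level-l Clenshaw-Curtis points and weights on [-1, 1]."""
    if l == 1:
        return np.array([0.0]), np.array([2.0])
    n = 2 ** (l - 1) + 1                        # n_l^1 = 2^(l-1) + 1
    theta = np.pi * np.arange(n) / (n - 1)
    x = np.cos(theta)                           # practical Clenshaw-Curtis points (3.9)
    # Moments of Chebyshev polynomials: int T_j dx = 2/(1-j^2) for even j, 0 for odd j.
    moments = np.zeros(n)
    even = np.arange(0, n, 2)
    moments[even] = 2.0 / (1.0 - even.astype(float) ** 2)
    T = np.cos(np.outer(np.arange(n), theta))   # T[j, i] = T_j(x_i)
    w = np.linalg.solve(T, moments)             # interpolatory weights
    return x, w

x, w = clenshaw_curtis(5)
approx = np.dot(w, np.exp(x))                   # approximately e - 1/e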

3.3. Gauss Quadrature Rule. Gauss quadrature is also an interpolatory quadrature scheme like the above two schemes. Several Gauss quadrature schemes exist in practice; examples are Gauss-Legendre quadrature, Gauss-Hermite quadrature, etc. Gaussian integration methods approximate more general integrals of the form:

$$If = \int_{-1}^{1} w(x) f(x) \, dx,$$

where w(x) is a positive weight function over the interval. Such integrals arise frequently in probability, where w(x) takes the role of a probability density function (PDF). For $w(x) \equiv 1$, we recover the original integral. The Gaussian quadrature


nodes are chosen to be the zeros of the polynomials that form an orthogonal family with respect to the inner product:

$$\langle f, g \rangle = \int_{-1}^{1} w(x) f(x) g(x) \, dx.$$

When $w(x) = 1$, the family of orthogonal polynomials with respect to this measure is Legendre. Similarly, when $w(x) = e^{-x^2}$, the family of orthogonal polynomials is Hermite. In our case, we only focus on $w(x) = 1$. We note here that this family of orthogonal Legendre polynomials $\{p_n(x)\}$ can be generated by an orthogonalization process such as Gram-Schmidt.

The main idea of Gauss-Legendre quadrature is to set the quadrature points to be the zeros of the nth degree Legendre polynomial $p_n(x)$. A family of polynomials orthogonal with respect to the above inner product typically satisfies a three-term recurrence relation of the form:

$$x\, p_n(x) = \beta_{n-1}\, p_{n-1}(x) + \alpha_n\, p_n(x) + \beta_n\, p_{n+1}(x),$$

where the $\alpha_i$ and $\beta_i$ are scalars. In the case of Legendre polynomials, $\alpha_n$ and $\beta_n$ can be computed analytically [26]:

$$\alpha_n = 0, \qquad \beta_n = \frac{1}{2}\left( 1 - (2n)^{-2} \right)^{-1/2}.$$

The quadrature points (the zeros of $p_n(x)$) then reduce to the eigenvalues of the symmetric tri-diagonal Jacobi matrix with $\{\alpha_n\}$ on the principal diagonal and $\{\beta_n\}$ above and below it. The eigenvalues of this tri-diagonal matrix can be computed to yield the quadrature points. The method of Golub and Welsch [4] describes a way to calculate the quadrature weights efficiently. They observed that the weights correspond to the squared first components of the orthonormalized eigenvectors of the Jacobi matrix, scaled by the zeroth moment of w(x). Thus, the eigenvalue decomposition of the Jacobi matrix yields both the quadrature points and the weights. In our implementation, we use the method in [4] to calculate the weights and quadrature points. A table of Gaussian quadrature weights and abscissas and several related works may be found in [1].
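A compact sketch of the Golub-Welsch construction for $w(x) = 1$, where the zeroth moment is $\mu_0 = \int_{-1}^{1} dx = 2$:

import numpy as np

def gauss_legendre(n):
    """n-point Gauss-Legendre rule on [-1, 1] via the Golub-Welsch method [4]."""
    k = np.arange(1, n)
    beta = 0.5 / np.sqrt(1.0 - (2.0 * k) ** (-2.0))  # off-diagonal entries beta_k
    J = np.diag(beta, 1) + np.diag(beta, -1)         # symmetric tridiagonal Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)                  # nodes are the eigenvalues
    weights = 2.0 * vecs[0, :] ** 2                  # mu_0 times squared first components
    return nodes, weights

x, w = gauss_legendre(7)
approx = np.dot(w, np.exp(x))                        # exact for polynomials of degree <= 13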

The number of Gaussian quadrature points $n_l^1$ at level l is given by:

$$n_l^1 = 2^l - 1, \quad l \ge 1.$$

An n-point Gaussian quadrature integrates all polynomials up to degree 2n - 1 exactly. However, Gauss quadrature points are not nested, i.e. $\Gamma_l^1 \not\subset \Gamma_{l+1}^1$. The error for Gaussian quadrature has the same asymptotic upper bound as Clenshaw-Curtis quadrature, based on the theoretical results in [3].

3.4. Gauss-Kronrod-Patterson Rule. The Gauss-Kronrod-Patterson rule (referred to as the Gauss-Patterson rule from now on) addresses the issue of non-nestedness of the regular Gauss-Legendre quadrature. Kronrod [5] extended an n-point Gauss quadrature formula by n + 1 points, so that the resulting (2n + 1)-point rule exactly integrates polynomials up to degree 3n + 1. This is done in order to make the resulting quadrature rule nested. The new points added to the quadrature set are zeros of Stieltjes polynomials. The difference between the Gaussian quadrature estimate and its Kronrod extension yields an error estimate of the quadrature (and also an estimate of the integral).


Patterson [6] described a method that recursively iterates Kronrod's scheme to obtain a sequence of nested quadrature formulae [16]. The interested reader is referred to [6] and [7] for more details on this construction. It should be noted that Patterson extensions do not exist for all Gauss-Legendre polynomials.

For our purposes, we set $Q_2^1$ to the 3-point Gauss formula, and $Q_l^1$ for $l \ge 3$ equal to its (l - 2)-fold Patterson extension. This yields a total number of quadrature points:

$$n_l^1 = 2^l - 1, \quad l \ge 2,$$

and a polynomial degree of exactness of nearly $\frac{3 n_l^1}{2}$. The error in the integral for $f \in C^r$ is again:

$$E_l^1 f = O\left(2^{-lr}\right).$$

Thus, among all the nested quadrature formulae seen above, Gauss-Patterson has the highest polynomial exactness. In our implementation, we obtain the weights and points of the Gauss-Patterson rule from the open-source package QUADPACK [11], available on the web for free. In Fig. 3.1, we show the quadrature points for all four rules above, for increasing levels l. As expected, the trapezoid, Clenshaw-Curtis and Gauss-Patterson points are nested, while the Gauss nodes are not. In the following section, we describe methods for extending 1-D quadrature rules to higher dimensions.

    Fig. 3.1: Quadrature points (blue dots) for various rules, at various accuracy levels l.

4. Cubature. In this section, we review some methods for multi-dimensional integration. We start with the naive tensor product rule, and then describe the sparse grid construction of Smolyak [12].


4.1. Full Tensor Product. Given a function f(x) with $x \in \mathbb{R}^d$, and a 1-D quadrature rule:

$$Q_l^1 f = \sum_{i=1}^{n_l^1} w_{li} f(x_{li}),$$

the full tensor product rule for the approximation of $I_d f$ is given by [16]:

$$\left( Q_{l_1}^1 \otimes \cdots \otimes Q_{l_d}^1 \right) f := \sum_{i_1=1}^{n_{l_1}^1} \cdots \sum_{i_d=1}^{n_{l_d}^1} w_{l_1 i_1} \cdots w_{l_d i_d} \, f(x_{l_1 i_1}, \ldots, x_{l_d i_d}), \tag{4.1}$$

where $l_i$ is the desired level of accuracy in dimension i. The weights are simply the products of the weights along each dimension. We can see that the total number of quadrature points in this case is $N = \prod_{i=1}^{d} n_{l_i}^1$, which increases exponentially with d. Therefore, this rule is not very practical, as it quickly limits the accuracy one can achieve in each dimension. It is useful only when integrating low-dimensional functions which are highly smooth and do not require a large number of quadrature points in any direction. This leads us to sparse grid methods.
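Before moving on, here is a direct sketch of the product rule (4.1), using Gauss-Legendre rules with the $n_l = 2^l - 1$ level map of section 3.3 (the helper name is ours):

import numpy as np
from itertools import product

def full_tensor_quad(f, levels):
    """Full tensor-product cubature (4.1) over [-1, 1]^d."""
    rules = [np.polynomial.legendre.leggauss(2 ** l - 1) for l in levels]
    total = 0.0
    for idx in product(*(range(len(x)) for x, _ in rules)):
        point = np.array([rules[j][0][i] for j, i in enumerate(idx)])
        weight = np.prod([rules[j][1][i] for j, i in enumerate(idx)])
        total += weight * f(point)            # product weight times f at the grid point
    return total

# Example: exp(x1 + x2) over [-1, 1]^2; exact value is (e - 1/e)^2.
approx = full_tensor_quad(lambda p: np.exp(p.sum()), levels=[4, 4])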

4.2. Smolyak Sparse Grid Construction. Smolyak [12] proposed an algorithm by which the computational cost of evaluating a given multi-dimensional integral can be reduced significantly below that of the full tensor product approximation. The basic idea of the method is that in order to approximate a multi-dimensional integral to a given accuracy level, the full tensor product performs more computations than necessary. The same level of accuracy can be achieved with far fewer computations, by carefully choosing the quadrature points and the quadrature rule. To this end, given a quadrature rule $Q_l^1 f$ in 1-D, a difference quadrature formula is defined as:

$$\Delta_k^1 f := \left( Q_k^1 - Q_{k-1}^1 \right) f, \quad \text{with } Q_0^1 f = 0. \tag{4.2}$$

We see that $\Delta_k^1 f$ is another quadrature rule for f. The grid of quadrature points for this rule is the union of the grids $\Gamma_k^1 \cup \Gamma_{k-1}^1$, which is simply $\Gamma_k^1$ if $Q_k^1$ is a nested quadrature rule. We immediately see the advantage of nested quadratures over non-nested ones, as they require fewer (assumed expensive) function evaluations in the approximate integration. Smolyak's sparse grid construction for the integral $I_d f$ is then:

$$Q_l^d f := \sum_{|k|_1 \le l + d - 1} \left( \Delta_{k_1}^1 \otimes \cdots \otimes \Delta_{k_d}^1 \right) f, \tag{4.3}$$

where $k \in \mathbb{N}^d$ is the multi-index containing the accuracy levels in each dimension, while l is the desired accuracy level of the cubature. Hence, the summation is performed only over index sets that lie inside the simplex $|k|_1 \le l + d - 1$, as opposed to the whole hypercube $1 \le k_i \le l$ for all i, which is what the full tensor product does. In Fig. 4.1, we show the quadrature points for d = 2 at various levels l. We see that the sparse grid needs far fewer function evaluations (N) than the full tensor product, and this difference magnifies as l increases. In Fig. 4.2, we plot the sparse-grid cubature points corresponding to all four 1-D quadrature rules. We notice, as expected, that the Gauss rule is not nested, and requires more function evaluations. A minimal code sketch of this construction is given after Fig. 4.1 below.


Fig. 4.1: Quadrature points in the Smolyak construction (upper panel) and the full tensor product rule (lower panel) for d = 2 and various levels of accuracy l. A significantly higher number of function evaluations (quadrature points, N) is observed for the full tensor product.
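A minimal sketch of (4.2)-(4.3): each difference term $\Delta_k^1$ is represented as a signed 1-D rule, and repeated points are not merged (which a practical code would do, especially for nested rules):

import numpy as np
from itertools import product

def smolyak_quad(f, d, l, rule):
    """Smolyak sparse-grid cubature (4.3); rule(k) returns 1-D (points, weights) of level k."""
    def diff_rule(k):
        # Difference formula (4.2): Delta_k = Q_k - Q_{k-1}, with Q_0 = 0.
        x1, w1 = rule(k)
        if k == 1:
            return x1, w1
        x0, w0 = rule(k - 1)
        return np.concatenate([x1, x0]), np.concatenate([w1, -w0])
    total = 0.0
    for k in product(range(1, l + 1), repeat=d):
        if sum(k) > l + d - 1:                # keep only multi-indices in the simplex
            continue
        rules = [diff_rule(kj) for kj in k]
        for idx in product(*(range(len(x)) for x, _ in rules)):
            pt = np.array([rules[j][0][i] for j, i in enumerate(idx)])
            wt = np.prod([rules[j][1][i] for j, i in enumerate(idx)])
            total += wt * f(pt)
    return total

# 1-D Gauss-Legendre with n_k = 2^k - 1 points, as in section 3.3.
gl = lambda k: np.polynomial.legendre.leggauss(2 ** k - 1)
approx = smolyak_quad(lambda p: np.exp(p.sum()), d=3, l=4, rule=gl)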

The number of cubature points in a sparse-grid method at level l is [20]:

$$N_l^d = O\left(2^l l^{d-1}\right),$$

whereas for a full tensor product rule, the number of cubature points scales as $O(2^{ld})$. Efficient methods for computing the cubature weights for sparse grids are described in [16, 13, 15]. The error of a sparse-grid cubature for $f \in C^r$ at accuracy level l is given by [22, 14]:

$$E_l^d f = O\left( 2^{-lr} l^{(d-1)(r+1)} \right).$$

Several variants of these sparse grid methods exist in the literature. For example, Bungartz and Dirnstorfer [37] extend sparse-grid methods to a higher-order quadrature using hierarchical basis functions. Another similar work is [18]. Applications of sparse-grid quadrature in the areas of finance and insurance are explored in [21].

4.3. Adaptive Sparse Grids. Sparse-grid methods for cubature offer a great advantage over regular tensor product rules and even Monte Carlo methods when the integrand is smooth. However, for non-smooth functions, sparse-grid methods run into trouble when the number of dimensions increases. This is because 1-D quadrature rules are based on polynomial interpolation, and non-smooth functions require a large-degree polynomial to represent them accurately. Adaptive sparse-grid methods treat each dimension differently. They assess the dimensions according to their importance and thus reduce the dependence of the computational complexity on the dimension.


Fig. 4.2: Sparse-grid cubature points for d = 2, l = 6. Gauss quadrature points are not nested.

The dimension-adaptive algorithm finds important dimensions automatically and adapts by placing more cubature points in those dimensions [36]. In what follows, we describe the dimension-adaptive strategy of Gerstner and Griebel [36].

The dimension-adaptive quadrature allows general admissible index sets in the sparse-grid summation, i.e. the summation grid is no longer the simplex $|k|_1 \le l + d - 1$, but another set that is adaptively built. The self-adaptive algorithm finds the best index set by an iterative procedure. To this end, the notion of an admissible set is defined. An index set I is said to be admissible if, for all $k \in I$,

$$k - e_j \in I, \quad \text{for } 1 \le j \le d, \; k_j > 1.$$

Here, $e_j$ is the jth unit vector. This condition ensures that isolated multi-indices do not arise as candidates in I. This is done so that the telescoping sum of difference formulas $\Delta_{k_j}^1$ in the sparse grid remains valid. The general sparse-grid construction is then:

$$Q_I^d f := \sum_{k \in I} \left( \Delta_{k_1}^1 \otimes \cdots \otimes \Delta_{k_d}^1 \right) f,$$

as long as I remains admissible. However, this adaptive strategy is fruitful only if it adds cubature points to dimensions with large error. The error in a particular dimension is a property of the integrand f. Thus, the enrichment of the index set in the dimension-adaptive scheme is highly dependent on the nature of f. Therefore, asymptotic error bounds for the approximation are not available. However, the algorithm does allow for adaptive detection of dimensions with large error.


The algorithm starts by assuming the only member of the index set is $\mathbf{1} = [1, 1, \ldots, 1]$. Indices are added so that (i) the set remains admissible, and (ii) the largest error reduction is achieved. The rule

$$\Delta_k f = \left( \Delta_{k_1} \otimes \cdots \otimes \Delta_{k_d} \right) f$$

indicates the incremental improvement achieved by adding multi-index k to the index set. An indicator $g_k$ denotes the error indicator associated with a given multi-index k. It combines information from the associated difference term $\Delta_k f$ and the computational complexity involved in its estimation, given by the number of cubature points $n_k$ in its evaluation. The form of $g_k$ proposed in [36] is:

$$g_k = \max\left\{ \omega \frac{|\Delta_k f|}{|\Delta_{\mathbf{1}} f|}, \; (1 - \omega) \frac{n_{\mathbf{1}}}{n_k} \right\},$$

where $\omega$ is a term that weighs the relative importance of the incremental error reduction against the cost of evaluating the function at new cubature nodes. $\omega = 1$ is a greedy approach that assumes that the function evaluation is cheap or that the function is well-behaved (smooth); $\omega = 0$ disregards the error reduction in adding new multi-indices to I. Typically, $\omega$ is chosen between these extreme limits using rough knowledge of the function behavior. The set of forward neighbors of an element k of I is defined as:

$$F_k := \{ k + e_j, \; 1 \le j \le d \}.$$

The next multi-index to be added, s, is selected so that: (i) $s \notin I$, (ii) $s \in \bigcup_{k \in I} F_k$, and (iii) $I \cup \{s\}$ is admissible [23]. This is implemented by considering two subsets O and A of I. The set O contains the old multi-indices which need not be tested any more, while A contains those which are in consideration for inclusion in I. The set O is set to $\{\mathbf{1}\}$ initially and A to $F_{\mathbf{1}}$. The multi-index in A with the highest indicator g is added to O and removed from A. A is then completed by the forward neighbors of k that keep $I = A \cup O$ admissible. The error indicators g for the newly added indices are computed. This process repeats as long as the global error indicator $\eta := \sum_{k \in A} g_k$ is greater than a given tolerance $\epsilon$. The algorithm is written as:

Initialization:
    set $O = \{\mathbf{1}\}$
    set $A = F_{\mathbf{1}}$
    set $r = \sum_{k \in A \cup O} \Delta_k f$
    set $\eta = \sum_{k \in A} g_k$

while ($\eta > \epsilon$) do
    select $k \in A$ with largest $g_k$
    $O \leftarrow O \cup \{k\}$
    $A \leftarrow A \setminus \{k\}$
    $\eta \leftarrow \eta - g_k$
    for $s \in F_k$ such that $(s - e_j) \in O$ for $j = 1, \ldots, d$ do
        $A \leftarrow A \cup \{s\}$
        $r \leftarrow r + \Delta_s f$
        $\eta \leftarrow \eta + g_s$


    end for

    end while

    return r
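The loop above can be sketched in Python as follows; delta(k, f) and npts(k), which evaluate $\Delta_k f$ and count its cubature points, are hypothetical helpers (they could be assembled from the diff_rule of the smolyak_quad sketch in section 4.2):

import numpy as np

def adaptive_quad(f, d, delta, npts, omega=0.5, tol=1e-8, max_iter=500):
    """Sketch of the dimension-adaptive quadrature loop of Gerstner and Griebel [36]."""
    one = (1,) * d
    dval = {one: delta(one, f)}               # cached values of Delta_k f
    r = dval[one]                             # running integral estimate
    d1 = max(abs(dval[one]), 1e-300)          # |Delta_1 f|, guarded against zero
    n1 = npts(one)
    O, A = set(), {one}

    def g(k):                                 # error indicator g_k
        return max(omega * abs(dval[k]) / d1, (1.0 - omega) * n1 / npts(k))

    for _ in range(max_iter):
        if not A or sum(g(k) for k in A) <= tol:   # global indicator eta
            break
        k = max(A, key=g)                     # multi-index with the largest indicator
        A.discard(k)
        O.add(k)
        for j in range(d):                    # forward neighbors k + e_j
            s = tuple(k[i] + (i == j) for i in range(d))
            # Admissibility: every backward neighbor of s must already be in O.
            if all(s[i] == 1 or tuple(s[m] - (m == i) for m in range(d)) in O
                   for i in range(d)):
                dval[s] = delta(s, f)
                A.add(s)
                r += dval[s]
    return r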

5. Applications and results. In this section, we apply our sparse-grid implementations to three separate cases of multivariate integration arising from physical problems. These examples have been obtained from [16] for easy bench-marking. We compare our results both with published literature and with the algorithms included in the open-source multi-dimensional integration library CUBA [44], available on the web. The integrals arising in the chosen applications range from moderate dimensions ($3 \le d \le 6$) to high dimensions ($d > 7$).

5.1. Test functions. We first consider an integral of the form:

$$I = \left( 1 + \frac{1}{d} \right)^d \int_{[0,1]^d} \prod_{i=1}^{d} x_i^{1/d} \, dx.$$

The exact value of this integral is 1. This integral is chosen because it exhibits a large sensitivity near the origin. In this case, we set d = 5. Our quadrature rules assume the basic integral to be over the hypercube $[-1, 1]^d$. Thus, we perform the change of variables $x \mapsto \frac{y + 1}{2}$ to re-write the integral as:

$$I = \left( 1 + \frac{1}{d} \right)^d 2^{-(d+1)} \int_{[-1,1]^d} \prod_{i=1}^{d} (y_i + 1)^{1/d} \, dy.$$

This integral can be evaluated directly using any of the quadrature rules discussed earlier. Here, we only present results comparing our sparse-grid implementation, the adaptive sparse-grid method and the results obtained using the four available solvers in CUBA (VEGAS, SUAVE, DIVONNE and CUHRE). The reader is referred to [44] for a detailed description of these four algorithms. We only mention here that VEGAS is the primary quasi-Monte Carlo based integration scheme, based on Sobol sequences and importance sampling, whereas SUAVE is a modification of VEGAS that includes an adaptive sub-division strategy. CUHRE is a deterministic integration method based on quadrature rules, while DIVONNE is also a Monte Carlo based method, which uses stratified sampling.
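For illustration, the transformed integrand can be fed to the hypothetical smolyak_quad/gl sketch of section 4.2; this is a rough reconstruction, not the exact experimental setup behind Fig. 5.1:

import numpy as np

# The 5-D test integrand of section 5.1 after the change of variables to [-1, 1]^5.
d = 5
const = (1.0 + 1.0 / d) ** d * 2.0 ** -(d + 1)
f = lambda y: const * np.prod((y + 1.0) ** (1.0 / d))
for l in range(1, 6):
    print(l, abs(1.0 - smolyak_quad(f, d=d, l=l, rule=gl)))   # error vs. level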

The results of the above three classes of methods are shown in Fig. 5.1. Here, we plot the absolute error in the integral, $|I_d f - Q_l^d f|$, versus the total number of function evaluations required by the corresponding rule. There are other possible comparison criteria, but we limit ourselves to this one in this paper. We observe from panel (a) that the Gauss-Patterson rule performs best among the four quadrature rules discussed. Gauss quadrature also performs well, but Gauss-Patterson has a smaller error for asymptotically large numbers of function evaluations. However, the main notable point here is that the Clenshaw-Curtis rule does not perform well. This is perhaps because the Clenshaw-Curtis rule places many points near the origin, where the function is very sensitive. The superiority of the Gauss-Patterson rule over Clenshaw-Curtis is expected to diminish as the dimension d increases, since the latter requires fewer function evaluations for polynomial exactness when l < d [16].

The Monte Carlo schemes of CUBA do not perform very well, as they exhibit a slower convergence rate compared to the Gauss and Gauss-Patterson rules. This agrees with the results shown in [16] and [43].


Fig. 5.1: Performance of various cubature rules (5.1): the absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.

This is because the function is smooth in the interior of the unit hypercube, and Monte Carlo methods do not take advantage of this fact.

Finally, the sparse-grid implementation is found to improve the asymptotic performance of both the trapezoidal rule and the Clenshaw-Curtis rule, but it does not significantly affect the Gauss and Gauss-Patterson rules. Our dimension-adaptive strategy is expected to be most useful when different dimensions have unequal importance. In this case, the integrand is symmetric, and thus we do not see a notable improvement over the already accurate Gauss rules.

5.2. Absorption Problem. This example arises from the transport problem that describes the behavior of a particle moving through a 1-D slab of unit length [43]. At each step, the particle travels a random distance in [0, 1]. If it does not leave the slab, it may be absorbed, with probability $1 - \sigma$. The integral equation


describing the motion of this particle is:

$$y(x) = x + \sigma \int_x^1 y(z) \, dz.$$

The exact solution to this equation is:

$$y(x) = \frac{1}{\sigma}\left( 1 - (1 - \sigma)\, e^{\sigma(1-x)} \right).$$

Fig. 5.2: Performance of various cubature rules (5.2): absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.

We are interested in the solution y(0) when $\sigma = 0.5$. This result may be obtained by considering a simplified multi-dimensional integral (see [43] for details):

$$y(x) = \int_{[0,1]^d} \sum_{n=0}^{d-1} F_n(x, z) \, dz,$$

where

$$F_n(x, z) = \sigma^n (1 - x)^n \left( \prod_{j=1}^{n-1} z_j^{n-j} \right) \left( 1 - (1 - x) \prod_{j=1}^{n} z_j \right).$$

In this problem, we set d = 10. The results obtained by our sparse-grid implementation and the algorithms in CUBA are shown in Fig. 5.2.
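A sketch of this integrand under the $[-1, 1]^d$ convention of our quadrature rules (the helper is ours and follows the reconstruction of $F_n$ above; the closed form gives $y(0) = 2 - \sqrt{e} \approx 0.3513$ for $\sigma = 0.5$):

import numpy as np

def absorption_integrand(y, x=0.0, sigma=0.5):
    """Sum of the terms F_n(x, z), with z in [0, 1]^d mapped from y in [-1, 1]^d."""
    z = (y + 1.0) / 2.0                       # map [-1, 1]^d -> [0, 1]^d
    d = z.size
    total = 0.0
    for n in range(d):
        head = np.prod(z[:n - 1] ** np.arange(n - 1, 0, -1)) if n >= 2 else 1.0
        tail = 1.0 - (1.0 - x) * np.prod(z[:n])
        total += sigma ** n * (1.0 - x) ** n * head * tail
    return total * 0.5 ** d                   # Jacobian of the change of variables

# e.g. y0 = smolyak_quad(absorption_integrand, d=10, l=3, rule=gl), using the
# section 4.2 sketch; compare against 2 - sqrt(e).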

We observe that the trapezoidal rule is again outperformed by the other three rules. The Clenshaw-Curtis rule performs as well as the Gauss-Patterson and Gauss quadratures. We ran the adaptive sparse-grid method for this example using all the 1-D rules, but show the results only for the Clenshaw-Curtis rule, as it is similar to both Gauss rules. We see that the adaptive sparse grid performs better than any regular sparse-grid rule for this problem, as it requires fewer function evaluations for a given accuracy level.


Similar to the earlier example, the Monte Carlo methods of CUBA do not exhibit a rapid convergence rate. This is due to the smooth nature of the integrand and its high dimensionality. For a non-smooth integrand of equally large dimension, we expect the performance of Monte Carlo methods to be comparable to that of sparse-grid methods.

5.3. Integral Equation. The test integral we consider next is also taken from [16] and arises from an integral equation obtained using a finite element or boundary element discretization of a problem. We do not provide many details about the physical problem; these may be obtained in [16]. Given $\Omega = [0, 1]^2$, we compute the 4-dimensional integral:

$$a_{b,c} = \int_{(b_1-1)h}^{(b_1+1)h} \int_{(b_2-1)h}^{(b_2+1)h} \int_{(c_1-1)h}^{(c_1+1)h} \int_{(c_2-1)h}^{(c_2+1)h} \frac{\phi_b^h(x)\, \phi_c^h(y)}{|x - y|} \, dy \, dx,$$

where

$$\phi_b^h(x) = \begin{cases} \phi^h(x - hb) & \text{for } x - hb \in \Omega, \\ 0 & \text{else}, \end{cases}$$

and

$$\phi^h(x) = \max\left\{ (1 - |x_1|/h)(1 - |x_2|/h), \, 0 \right\}.$$

The exact values of the integrals are obtained from [16]. In this example, we set h = 1/32, $b = (b_1, b_2) = (0, 0)$ and $c = (c_1, c_2) = (0, 3)$. For these values, the integrand has sharp edges in the interior of the domain. The results of the sparse-grid implementation for this integral are shown in Fig. 5.3.

Fig. 5.3: Performance of various cubature rules (5.3): absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.

The sparse-grid results are similar to those shown for the previous two examples. The trapezoidal rule performs


the worst among all four 1-D quadrature rules. The Gauss and Gauss-Patterson rules in this example yield identical error values up to the third decimal place. Both rules converge adequately well, although Gauss-Patterson requires far fewer function evaluations to reach the same error level. The Clenshaw-Curtis rule converges at a rate comparable to the Gaussian rules. Since the Gauss rules converge fastest here, we only show results of the adaptive sparse-grid method for the Gauss quadrature. As in the case of example 1, the adaptive sparse-grid method only marginally improves on the performance of the regular sparse-grid method.

In this example, we observe that the Monte Carlo methods perform as well as the leading quadrature rules. All four Monte Carlo rules converge at rates comparable to the Gauss quadrature. This is because of the non-smooth nature of the integrand in the interior of the domain, which is detrimental to the performance of quadrature rules but does not affect the Monte Carlo based methods. Also, in this case, the dimension of the integrand is low (d = 4), which means the hypercube is well sampled even by a moderate number of points.

6. Conclusions. In this term paper, we have explored various methods for multi-dimensional integration. We performed an exhaustive literature review on this topic, summarizing most of the relevant work and providing references for more obscure works. Our main focus from an implementation point of view was the sparse-grid method introduced by Smolyak in 1963. We implemented the sparse-grid method based on various quadrature rules, and later extended it to adaptive sparse-grid methods, after noting the key challenges faced by the regular sparse-grid approach. We tested our sparse-grid implementation by comparing our results against published literature and against competing Monte Carlo methods for three problems arising in physics.

We find that for smooth integrands of moderate to large dimensions, sparse-grid methods may be significantly better than Monte Carlo methods. For non-smooth integrands, this edge of sparse-grid methods fades, and more efficient adaptive sparse-grid implementations may be possible. For large dimensional integrands which are highly discontinuous, Monte Carlo methods will also perform poorly, because they cannot possibly sample the whole domain adequately enough to capture all the discontinuities. From our results, we have also found that dimension-adaptive sparse grid methods do perform better than regular sparse grids, but only in some situations. Their performance in most cases is comparable to Quasi-Monte Carlo methods. However, in practice, the choice of the numerical scheme for cubature is highly dependent on the nature of the integrand: there is no single ultimate method for cubature that circumvents the drawbacks of all the rest.

    REFERENCES

[1] A. H. Stroud and D. Secrest, Gaussian Quadrature Formulas, Prentice-Hall, Englewood Cliffs, N.J., 1969.
[2] C. W. Clenshaw and A. R. Curtis, A method for numerical integration on an automatic computer, Numerische Mathematik, 2 (1960), pp. 197-205.
[3] L. N. Trefethen, Is Gauss quadrature better than Clenshaw-Curtis?, SIAM Review, 50 (2008), pp. 67-87.
[4] G. H. Golub and J. H. Welsch, Calculation of Gauss quadrature rules, Mathematics of Computation, 23 (1969), pp. 221-230.
[5] A. S. Kronrod, Nodes and Weights of Quadrature Formulas, Consultants Bureau, New York, 1965.


[6] T. N. L. Patterson, The optimum addition of points to quadrature formulae, Math. Comp., 22 (1968), pp. 847-856.
[7] T. N. L. Patterson, Modified optimal quadrature extensions, Numerische Mathematik, 64 (1993), pp. 511-520.
[8] J. Waldvogel, Fast construction of the Fejér and Clenshaw-Curtis quadrature rules, BIT Numerical Mathematics, 46 (2006), pp. 195-202.
[9] W. Gentleman, Implementing the Clenshaw-Curtis quadrature, I - methodology and experience, Commun. ACM, 15 (1972), pp. 337-342.
[10] W. Gentleman, Implementing the Clenshaw-Curtis quadrature, II - computing the cosine transform, Commun. ACM, 15 (1972), pp. 343-346.
[11] R. Piessens, E. de Doncker, C. Überhuber, and D. Kahaner, QUADPACK - A Subroutine Package for Automatic Integration, Springer-Verlag, 1993.
[12] S. A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl., 4 (1963), pp. 240-243.
[13] K. Petras, Fast calculation of coefficients in the Smolyak algorithm, Numerical Algorithms, 26 (2001), pp. 93-103.
[14] K. Petras, On the Smolyak cubature error for analytic functions, Advances in Computational Mathematics, 12 (2000), pp. 71-93.
[15] K. Petras, Smolyak cubature of given polynomial degree with few nodes for increasing dimension, Numerische Mathematik, 93 (2003), pp. 729-753.
[16] T. Gerstner and M. Griebel, Numerical integration using sparse grids, Numerical Algorithms, 18 (1998), pp. 209-232.
[17] H. J. Bungartz and M. Griebel, Sparse grids, Acta Numerica, 13 (2004), pp. 147-269.
[18] V. Barthelmann, E. Novak, and K. Ritter, High dimensional polynomial interpolation on sparse grids, Advances in Computational Mathematics, 12 (2000), pp. 273-288.
[19] E. Novak and K. Ritter, High dimensional integration of smooth functions over cubes, Numerische Mathematik, 75 (1996), pp. 79-98.
[20] E. Novak and K. Ritter, Simple cubature formulas with high polynomial exactness, Constr. Approx., 15 (1999), pp. 499-522.
[21] M. Holtz, Sparse grid quadrature in high dimensions with applications in finance and insurance, Ph.D. thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, 2008.
[22] G. W. Wasilkowski and H. Wozniakowski, Explicit cost bounds of algorithms for multivariate tensor product problems, J. Complexity, 11 (1995), pp. 1-56.
[23] O. P. Le Maître and O. M. Knio, Spectral Methods for Uncertainty Quantification, first ed., Scientific Computation Series, Springer, 2010.
[24] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, eighth ed., Springer-Verlag, New York, 1980.
[25] P. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, 1975.
[26] N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.
[27] C. Zenger, Sparse grids, SFB Bericht 342/18/90 A, Institut für Informatik, TU München, 1990.
[28] J. Friedman, Multivariate adaptive regression splines, Annals of Statistics, 19 (1991), pp. 1-141.
[29] T.-X. He, Dimensionality Reducing Expansion of Multivariate Integration, Birkhäuser, 2001.
[30] G. Wahba, Spline Models for Observational Data, SIAM, Philadelphia, 1990.
[31] J. Garcke and M. Griebel, Classification with anisotropic sparse grids using simplicial basis functions, Intelligent Data Analysis, 6 (2002), pp. 483-502.
[32] T. Bonk, A new algorithm for multi-dimensional adaptive numerical quadrature, in: Adaptive Methods - Algorithms, Theory and Applications, eds. W. Hackbusch and G. Wittum, Vieweg, Braunschweig, 1994, pp. 54-68.
[33] A. Genz and A. A. Malik, An adaptive algorithm for numerical integration over an n-dimensional rectangular region, J. Comput. Appl. Math., 6 (1980), pp. 295-302.
[34] F. Nobile, R. Tempone, and C. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal., 46 (2008), pp. 2309-2345.
[35] P. van Dooren and L. de Ridder, An adaptive algorithm for numerical integration over an n-dimensional cube, J. Comp. Appl. Math., 2 (1976), pp. 207-217.
[36] T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature, Computing, 71 (2003), pp. 65-87.
[37] H. J. Bungartz and S. Dirnstorfer, Multivariate quadrature on adaptive sparse grids, Computing, 71 (2003), pp. 89-114.
[38] J. D. Jakeman and S. G. Roberts, Local and dimension adaptive sparse grid interpolation and quadrature, CoRR, 2011.


[39] J. Berntsen, T. O. Espelid, and A. Genz, An adaptive algorithm for the approximate calculation of multiple integrals, ACM Trans. Math. Soft., 17 (1991), pp. 437-451.
[40] C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer Texts in Statistics, second ed., Springer, 2004.
[41] I. Sloan and H. Wozniakowski, When are quasi-Monte Carlo algorithms efficient for high-dimensional integrals?, J. Complexity, 14 (1998), pp. 1-33.
[42] M. McKay, W. Conover, and R. Beckman, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 21 (1979), pp. 239-245.
[43] W. J. Morokoff and R. E. Caflisch, Quasi-Monte Carlo integration, J. Comput. Phys., 122 (1995), pp. 218-230.
[44] T. Hahn, CUBA - a library for multidimensional numerical integration, Comput. Phys. Commun., 168 (2005), pp. 78-95.
[45] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia, 1992.
[46] I. Sobol, On the distribution of points in a cube and the approximate evaluation of integrals, U.S.S.R. Computational Mathematics and Mathematical Physics, 7 (1967), pp. 86-112.
[47] I. H. Sloan and S. Joe, Lattice Methods for Multiple Integration, Oxford University Press, Oxford, 1994.
[48] H. N. Mhaskar, Neural networks and approximation theory, Neural Networks, 9 (1996), pp. 711-722.
[49] R. Schürer, Parallel high-dimensional integration: quasi-Monte Carlo versus adaptive cubature rules, in: Proceedings of Computational Science - ICCS 2001, International Conference, San Francisco, CA, USA, May 28-30, 2001.
[50] F. Sprengel, Periodic interpolation and wavelets on sparse grids, Numer. Algorithms, 17 (1998), pp. 147-169.
[51] M. Griebel, A parallelizable and vectorizable multi-level algorithm on sparse grids, in: Parallel Algorithms for Partial Differential Equations, ed. W. Hackbusch, Notes on Numerical Fluid Mechanics, Vol. 31, Vieweg, Braunschweig, 1991.
[52] M. Griebel and G. Zumbusch, Adaptive sparse grids for hyperbolic conservation laws, in: Proc. of the 7th Internat. Conf. on Hyperbolic Problems, Birkhäuser, Basel, 1998.
[53] T. Gerstner, Adaptive hierarchical methods for landscape representation and analysis, in: Proc. of the Workshop on Process Modeling and Landform Evolution, eds. S. Hergarten and H. Neugebauer, Springer, Berlin, 1998.