
## 1 Scope of the Chapter

This chapter provides functions for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.

## 2 Background to the Problems

The functions in this chapter are designed to estimate:
1. (a) the value of a one-dimensional definite integral of the form
 $\int_a^b f(x)\,dx$ (1)
where $f\left(x\right)$ is defined by you, either at a set of points $\left({x}_{\mathit{i}},f\left({x}_{\mathit{i}}\right)\right)$, for $\mathit{i}=1,2,\dots ,n$, where $a={x}_{1}<{x}_{2}<\cdots <{x}_{n}=b$, or in the form of a function; and the limits of integration $a,b$ may be finite or infinite.
Some methods are specially designed for integrands of the form
 $f(x)=w(x)g(x)$ (2)
which contain a factor $w\left(x\right)$, called the weight-function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w\left(x\right)$ factor.
2. (b) the value of a multidimensional definite integral of the form
 $\int_{R_n} f(x_1,x_2,\dots,x_n)\,dx_n \cdots dx_2\,dx_1$ (3)
where $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is a function defined by you and ${R}_{n}$ is some region of $n$-dimensional space.
The simplest form of ${R}_{n}$ is the $n$-rectangle defined by
 $a_i \le x_i \le b_i, \quad i=1,2,\dots,n$ (4)
where ${a}_{i}$ and ${b}_{i}$ are constants. When ${a}_{i}$ and ${b}_{i}$ are functions of ${x}_{j}$ ($j<i$), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate the transformation procedure.

### 2.1 One-dimensional Integrals

To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e.,
 $\int_a^b f(x)\,dx \simeq \sum_{i=1}^{N} w_i f(x_i).$ (5)
The points ${x}_{i}$ within the interval $\left[a,b\right]$ are known as the abscissae, and the ${w}_{i}$ are known as the weights.
More generally, if the integrand has the form (2), the corresponding formula is
 $\int_a^b w(x)g(x)\,dx \simeq \sum_{i=1}^{N} w_i g(x_i).$ (6)
If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights.
The appropriate rule to use will depend on the interval $\left[a,b\right]$ – whether finite or otherwise – and on the form of any $w\left(x\right)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f\left(x\right)$; or of $g\left(x\right)$, if there is a $w\left(x\right)$ factor present.
Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f\left(x\right)$ or $g\left(x\right)$ of polynomial form.
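As a concrete illustration of equation (5) (generic NumPy code, not one of the functions in this chapter), an $N$-point Gauss–Legendre rule is exact for polynomials of degree up to $2N-1$:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n):
    """Approximate the integral of f over [a, b] by the weighted sum (5)."""
    x, w = np.polynomial.legendre.leggauss(n)  # abscissae/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)     # map abscissae to [a, b]
    wm = 0.5 * (b - a) * w                     # rescale weights accordingly
    return np.sum(wm * f(xm))

# A 5-point Gauss rule is exact for polynomials of degree <= 9:
# the integral of x^8 over [0, 1] is 1/9.
print(gauss_legendre_integral(lambda x: x**8, 0.0, 1.0, 5))
```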
The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented.
1. (a) Single rule evaluation procedures
A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result.
2. (b) Automatic procedures
The number of abscissae, $N$, within $\left[a,b\right]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:
A series of rules using increasing values of $N$ are successively applied over the whole interval $\left[a,b\right]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $\left(2N+1\right)$-point formula by adding $\left(N+1\right)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson.
The interval $\left[a,b\right]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth. Thus, the distribution of abscissae is adapted to the shape of the integrand.
Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities.
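A minimal sketch of such an adaptive subdivision strategy (illustrative only, not one of the NAG algorithms): Simpson's rule is applied on each sub-interval, and any sub-interval whose error estimate exceeds its share of the global tolerance is bisected, so that abscissae cluster where the integrand is badly behaved.

```python
def adaptive_simpson(f, a, b, tol):
    """Adaptive Simpson quadrature with a global absolute-error tolerance."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        err = (left + right - whole) / 15.0  # Richardson-style error estimate
        if abs(err) <= tol:
            return left + right + err
        # Halve the tolerance on each side: the sum of the sub-interval
        # errors must still meet the global accuracy requirement.
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)

# The subdivision is driven much deeper near the x = 0 end point,
# where sqrt(x) has an unbounded derivative.
print(adaptive_simpson(lambda x: x**0.5, 0.0, 1.0, 1e-8))  # ~2/3
```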
An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, which was efficient (i.e., required as few abscissae as possible), and which was reliable (i.e., always gave results to within the requested accuracy). Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must, therefore, be emphasized that the automatic functions in this chapter cannot be assumed to be 100% reliable. In general, however, the reliability is very high.

### 2.2 Multidimensional Integrals

A distinction must be made between cases of moderately low dimensionality (say, up to $4$ or $5$ dimensions), and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example a product of $3$-point rules in $20$ dimensions would require more than ${10}^{9}$ integrand evaluations. Special techniques such as the Monte Carlo methods can be used to deal with high dimensions.
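The growth in the number of evaluations quoted above is easily checked:

```python
# A product of 3-point rules needs 3^d integrand evaluations in d dimensions.
for d in (2, 5, 10, 20):
    print(f"d = {d:2d}: {3 ** d:,} evaluations")
# At d = 20 this is 3,486,784,401 -- more than 10^9 evaluations.
```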
1. (a) Products of one-dimensional rules
Using a two-dimensional integral as an example, we have
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N} w_i \left[\int_{a_2}^{b_2} f(x_i,y)\,dy\right]$ (7)
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^{N}\sum_{j=1}^{N} w_i v_j f(x_i,y_j)$ (8)
where $\left({w}_{i},{x}_{i}\right)$ and $\left({v}_{i},{y}_{i}\right)$ are the weights and abscissae of the rules used in the respective dimensions.
A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable.
For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of ${x}_{i}$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy and for this reason products of automatic one-dimensional functions should be used with caution (see Lyness (1983)).
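Formula (8) in executable form (generic NumPy code, not a NAG function), with the same Gauss–Legendre rule used in both dimensions:

```python
import numpy as np

def product_gauss_2d(f, a1, b1, a2, b2, n):
    """Tensor product of two n-point Gauss-Legendre rules, as in formula (8)."""
    x, w = np.polynomial.legendre.leggauss(n)
    xi = 0.5 * (b1 - a1) * x + 0.5 * (b1 + a1)  # abscissae in x
    yj = 0.5 * (b2 - a2) * x + 0.5 * (b2 + a2)  # abscissae in y
    wi = 0.5 * (b1 - a1) * w                    # weights in x
    vj = 0.5 * (b2 - a2) * w                    # weights in y
    X, Y = np.meshgrid(xi, yj, indexing="ij")
    # Double sum of formula (8): sum_i sum_j w_i v_j f(x_i, y_j).
    return np.einsum("i,j,ij->", wi, vj, f(X, Y))

# The integral of x*y over [0,1] x [0,1] is 1/4.
print(product_gauss_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 4))
```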
2. (b) Monte Carlo methods
These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy.
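The rudimentary form of the technique can be sketched as follows (illustrative NumPy code without variance reduction; the seeded generator is an assumption for reproducibility): the integral over the unit $d$-cube equals the mean of the integrand at uniformly distributed points, and the statistical standard error decays only as $O(N^{-1/2})$.

```python
import numpy as np

def monte_carlo(f, d, n, rng=None):
    """Crude Monte Carlo estimate of the integral of f over the unit d-cube."""
    rng = rng or np.random.default_rng(0)
    pts = rng.random((n, d))              # uniform sample of the unit d-cube
    vals = f(pts)
    estimate = vals.mean()                # sample mean estimates the integral
    std_error = vals.std(ddof=1) / np.sqrt(n)  # statistical standard error
    return estimate, std_error

# The integral of x1 + ... + x6 over [0,1]^6 is 3.
est, err = monte_carlo(lambda p: p.sum(axis=1), d=6, n=100_000)
print(est, "+/-", err)
```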
3. (c) Number theoretic methods
These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and from their number theoretic properties minimize the error on a prescribed class of integrals. The method can be combined with the Monte Carlo procedure.
4. (d) Sag–Szekeres method
By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations.
5. (e) Sparse grid methods
Given a set of one-dimensional quadrature rules of increasing levels of accuracy, the sparse grid method constructs an approximation to a multidimensional integral from $d$-dimensional tensor products of the differences between rules of adjacent levels. This gives a lower theoretical accuracy than the full grid approach of (a), though one still sufficient for various classes of sufficiently smooth integrands, and it requires substantially fewer evaluations than the full grid approach. Specifically, if a one-dimensional quadrature rule has $N\sim \mathit{O}\left({2}^{\ell }\right)$ points, the full grid will require $\mathit{O}\left({2}^{\ell d}\right)$ function evaluations, whereas the sparse grid of level $\ell$ will require $\mathit{O}\left({2}^{\ell }{d}^{\ell -1}\right)$. Hence a sparse grid approach is computationally feasible even for integrals over $d\sim \mathit{O}\left(100\right)$ dimensions.
Sparse grid methods are deterministic, and may be viewed as automatic whole domain procedures if their level $\ell$ is allowed to increase.
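Evaluating the two operation counts above for, say, level $\ell=5$ in $d=10$ dimensions shows why only the sparse grid is feasible:

```python
l, d = 5, 10
full_grid = 2 ** (l * d)             # O(2^(l d)) evaluations for the full grid
sparse_grid = 2 ** l * d ** (l - 1)  # O(2^l d^(l-1)) for the sparse grid
print(f"full grid:   {full_grid:.2e}")    # about 1.1e+15
print(f"sparse grid: {sparse_grid:.2e}")  # 3.2e+05
```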
An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be followed by a Monte Carlo method for integration.
The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques.

## 3 Recommendations on Choice and Use of Available Functions

This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication functions. The second subsection highlights the different levels of vectorization provided by different interfaces.
Sections 3.3.1, 3.3.2 and 3.4 consider in turn functions for: one-dimensional integrals over a finite interval, and over a semi-infinite or an infinite interval; and multidimensional integrals. Within each sub-section, functions are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals, and in these cases the automatic adaptive functions are generally the most convenient and reliable, although also the most expensive in computing time.
Note however that in some circumstances it may be counter-productive to use an automatic function. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an ‘outer’ quadrature or an optimization problem), then this further computation may be adversely affected by the ‘jagged performance profile’ of an automatic function; a simple rule-evaluation function may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended.

### 3.1 Direct and Reverse Communication

Functions in this chapter which evaluate an integral value may be classified as either direct communication or reverse communication. See Section 7 in How to Use the NAG Library for a description of these terms.
Currently in this chapter the only function explicitly using reverse communication is d01rac.

### 3.2 Choice of Interface

This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations. Vectorized interfaces typically allow for more efficient operation.
1. (a) Single abscissa interfaces
The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Many of the algorithms in this chapter are of this type.
Examples of this include d01fbc.
2. (b) Vectorized abscissae interfaces
The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Where possible and practical for the specific algorithm, all future routines will provide a vectorized abscissae interface.
Examples include d01rgc, d01uac, and the functions d01rjc and d01rkc, which are vectorized replacements for d01sjc and d01skc respectively.
3. (c) Multiple integral interfaces
These are functions which allow for multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single integral functions; however, they can provide higher efficiency, particularly if several integrals require the same subcalculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive.
d01rac is an example.

### 3.3 One-dimensional Integrals

#### 3.3.1 Over a Finite Interval

1. (a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (d01gac) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any function may be unreliable. In order to check results independently and so as to provide an alternative technique you may fit the integrand by Chebyshev series using e02adc and then use function e02ajc to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for d01gac). A further alternative is to fit a cubic spline to the data using e02bac and then to evaluate its integral using e02bdc.
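The cross-checking advice above can be mimicked with generic tools (NumPy/SciPy here, standing in for the NAG functions): integrate the tabulated data once with the trapezoidal rule and once via the integral of an interpolating cubic spline, and compare the two estimates.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, np.pi, 9)  # the integrand is known only at these points
y = np.sin(x)                   # exact integral over [0, pi] is 2

trap = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoidal rule
spline = CubicSpline(x, y).integrate(0.0, np.pi)    # spline-fit integral
print(trap, spline)  # the spline estimate is much closer to 2
```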
2. (b) Integrand defined as a function
If the functional form of $f\left(x\right)$ is known, then one of the following approaches should be taken. They are arranged in order from most specific to most general, so the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable functions to use will be d01rjc, d01rkc, d01rlc, d01rgc or d01rac, although these will in general be less efficient for simple integrals.
1. (i) Rule-evaluation functions
If $f\left(x\right)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian function with a suitable number of abscissae may be used.
d01tbc or d01tcc with d01fbc may be used if it is required to examine the weights and abscissae.
d01tbc is faster and more accurate, whereas d01tcc is more general. d01uac uses the same quadrature rules as d01tbc, and may be used if you do not explicitly require the weights and abscissae.
If $f\left(x\right)$ is well behaved, apart from a weight-function of the form
 $\left|x-\frac{a+b}{2}\right|^{c} \quad \text{or} \quad {\left(b-x\right)}^{c}{\left(x-a\right)}^{d},$
d01tcc with d01fbc may be used.
d01tbc and d01tcc generate weights and abscissae for specific Gauss rules. Weights and abscissae for other quadrature formulae may be computed using functions d01tdc or d01tec. Wherever possible, use d01tdc in preference to d01tec; the former, however, requires information that may not be readily available.
2. (ii) Automatic whole-interval functions
If $f\left(x\right)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole interval function d01bdc may be used. Additionally, d01esc with $d=1$ may be used with an appropriate transformation from the unit interval.
d01bdc uses the Gauss 10-point rule, with the 21-point Kronrod extension, and the subsequent 43- and 87-point Patterson extensions if required.
d01esc supports multiple simultaneous integrals, and has a vectorized interface. Either high-order Gauss–Patterson rules (of size ${2}^{\ell }-1$, for $\ell =1,\dots ,9$) or high-order Clenshaw–Curtis rules (of size ${2}^{\ell -1}+1$, for $\ell =2,\dots ,12$) may be used. Gauss–Patterson rules possess greater polynomial accuracy, whereas Clenshaw–Curtis rules are often well suited to oscillatory integrals.
Firstly, several functions are available for integrands of the form $w\left(x\right)g\left(x\right)$ where $g\left(x\right)$ is a ‘smooth’ function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w\left(x\right)$ is a weight function of one of the following forms.
1. if $w\left(x\right)={\left(b-x\right)}^{\alpha }{\left(x-a\right)}^{\beta }{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$, where $k,l=0$ or $1$, $\alpha ,\beta >-1$: use d01spc;
2. if $w\left(x\right)=\frac{1}{x-c}$: use d01sqc (this integral is called the Hilbert transform of $g$);
3. if $w\left(x\right)=\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$: use d01snc (this function can also handle certain types of singularities in $g\left(x\right)$).
Secondly, there are multiple routines for general $f\left(x\right)$, using different strategies.
d01rjc and d01rkc use the strategy of Piessens et al. (1983), using repeated bisection of the interval and, in the first case, the $\epsilon$-algorithm (Wynn (1956)) to improve the integral estimate. This can cope with singularities away from the end points, provided singular points do not occur as abscissae. d01rkc tends to perform better than d01rjc on more oscillatory integrals.
d01rlc uses the same subdivision strategy as d01rjc over a set of initial interval segments determined by supplied break-points. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other functions which do not allow break-points, although such integrals should then be evaluated over each of the sub-intervals separately.
d01rac again uses the strategy of Piessens et al. (1983), and provides the functionality of d01rjc, d01rkc and d01rlc in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.
d01rgc uses another adaptive scheme due to Gonnet (2010). This attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Further, it can explicitly deal with singular points at abscissae, should NaNs or infinities be returned by the user-supplied function, provided the generation of these does not cause the program to halt (see Chapter X07).

#### 3.3.2 Over a Semi-infinite or Infinite Interval

1. (a) Integrand defined at a set of points
If $f\left(x\right)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, d01gac, should be used.
2. (b) Integrand defined as a function
1. (i) Rule evaluation functions
If $f\left(x\right)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form:
1. ${e}^{-\beta x},\beta >0$ (semi-infinite interval, lower limit finite); or
2. ${e}^{-\beta x},\beta <0$ (semi-infinite interval, upper limit finite); or
3. ${e}^{-\beta {\left(x-\alpha \right)}^{2}},\beta >0$ (infinite interval),
or if $f\left(x\right)$ behaves approximately like a polynomial in ${\left(x+b\right)}^{-1}$ (semi-infinite range), then the Gaussian functions may be used.
d01uac may be used if it is not required to examine the weights and abscissae.
d01tbc or d01tcc with d01fbc may be used if it is required to examine the weights and abscissae.
d01tbc is faster and more accurate, whereas d01tcc is more general.
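For reference, the classical rules for weight functions 1 and 3 above are the Gauss–Laguerre and Gauss–Hermite formulae; generic NumPy versions (not the NAG functions) behave as follows:

```python
import numpy as np

xl, wl = np.polynomial.laguerre.laggauss(10)  # weight e^(-x) on [0, inf)
xh, wh = np.polynomial.hermite.hermgauss(10)  # weight e^(-x^2) on (-inf, inf)

# The integral of x^2 e^(-x) over [0, inf) is 2.
print(np.sum(wl * xl**2))
# The integral of x^2 e^(-x^2) over (-inf, inf) is sqrt(pi)/2.
print(np.sum(wh * xh**2), np.sqrt(np.pi) / 2)
```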
d01ubc returns an approximation to the specific problem.
d01rmc may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process.
d01ssc may be used for integrals involving weight functions of the form $\mathrm{cos}\left(\omega x\right)$ and $\mathrm{sin}\left(\omega x\right)$ over a semi-infinite interval (lower limit finite).
The following alternative procedures are mentioned for completeness, though their use will rarely be necessary.
1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite range methods applied.
2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which d01rmc can cope), the range may be divided, with d01rmc used on the infinite part.
3. A transformation to finite range may be employed, e.g.,
 $x=\frac{1-t}{t} \quad \text{or} \quad x=-{\mathrm{log}}_{e}t$
will transform $\left(0,\infty \right)$ to $\left(1,0\right)$, while for infinite ranges we have
 $\int_{-\infty}^{\infty} f(x)\,dx = \int_{0}^{\infty} \left(f(x)+f(-x)\right) dx.$
If the integrand behaves badly on $\left(-\infty ,0\right)$ and well on $\left(0,\infty \right)$, or vice versa, it is better to compute it as $\int_{-\infty}^{0} f(x)\,dx + \int_{0}^{\infty} f(x)\,dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved.
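A numerical check of the first transformation above (illustrative NumPy code): with $x=(1-t)/t$ we have $dx=-dt/t^2$, so a semi-infinite integral becomes an integral over $(0,1)$. For an integrand that is polynomial-like in $1/(1+x)$, the transformed integrand is itself polynomial, and a Gauss rule integrates it exactly.

```python
import numpy as np

def transformed_integral(f, n=20):
    """Integrate f over [0, inf) via x = (1-t)/t and Gauss-Legendre on (0,1)."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)              # map abscissae from [-1, 1] to (0, 1)
    w = 0.5 * w
    x = (1.0 - t) / t
    return np.sum(w * f(x) / t**2)   # 1/t^2 is the Jacobian of the map

# The integral of 1/(1+x)^2 over [0, inf) is exactly 1: since 1 + x = 1/t,
# the transformed integrand is the constant 1 and the rule is exact.
print(transformed_integral(lambda x: 1.0 / (1.0 + x) ** 2))
```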

### 3.4 Multidimensional Integrals

A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy, particularly for high-dimensional integrations. Several functions include a transformation procedure, using a user-supplied function, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region.
1. (a) Products of one-dimensional rules (suitable for up to about $5$ dimensions)
If $f\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ is known to be a sufficiently well behaved function of each variable ${x}_{i}$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by d01tbc or d01tcc with d01fbc. Rules for finite, semi-infinite and infinite ranges are included.
For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of d01dac may be used. The limits of the inner integral may be user-specified functions of the outer variable. Infinite limits may be handled by transformation (see Section 3.3.2); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.
If none of these functions proves suitable and convenient, the one-dimensional functions may be used recursively. For example, the two-dimensional integral
 $I=\int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y)\,dy\,dx$
may be expressed as
 $I=\int_{a_1}^{b_1} F(x)\,dx, \quad \text{where} \quad F(x)=\int_{a_2}^{b_2} f(x,y)\,dy.$
The user-supplied code to evaluate $F\left(x\right)$ will call the integration function for the $y$-integration, which will call more user-supplied code for $f\left(x,y\right)$ as a function of $y$ ($x$ being effectively a constant).
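The recursive scheme can be sketched with a generic one-dimensional integrator (SciPy's `quad` here, standing in for the NAG functions):

```python
from scipy.integrate import quad

def F(x):
    # Inner y-integration, with x held fixed (effectively a constant).
    inner, _ = quad(lambda y: x * y**2, 0.0, 1.0)
    return inner

# Outer x-integration; each evaluation of F triggers an inner quadrature.
outer, _ = quad(F, 0.0, 2.0)
print(outer)  # the integral of x*y^2 over [0,2] x [0,1] is 2/3
```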
The reverse communication function d01rac may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by d01rac.
2. (b) Sag–Szekeres method
d01fdc is particularly suitable for integrals of very large dimension although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable arguments may be varied for checking purposes or may be used to tune the algorithm to particular integrals.
3. (c) Number Theoretic method
Algorithms of this type carry out multidimensional integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated into the functions in this Library, hybridising the technique with the Monte Carlo procedure. An error estimate is provided in terms of the statistical standard error. A number of pre-computed optimal coefficient rules for up to $20$ dimensions are provided; others can be computed using d01gyc and d01gzc. Like the Sag–Szekeres method it is suitable for large dimensional integrals although the accuracy is not high.
d01gdc has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two functions, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code.
4. (d) A combinatorial extrapolation method
d01pac computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation.
5. (e) Sparse Grid method
d01esc implements a sparse grid quadrature scheme for the integration of a vector of multidimensional integrals over the unit hypercube,
 $F \simeq \int_{[0,1]^d} f(x)\,dx.$
The function uses a vectorized interface, which returns a set of points at which the integrands must be evaluated in a sparse storage format for efficiency.
Other domains can be readily integrated over by using an appropriate mapping inside the provided function for evaluating the integrands. It is suitable for $d$ up to $\mathit{O}\left(100\right)$, although no upper bound on the number of dimensions is enforced. It will also evaluate one-dimensional integrals, although in this case the sparse grid used is in fact the full grid.
The function uses optional parameters, set and queried using the functions d01zkc and d01zlc respectively. Amongst other options, these allow the parallelization of the function to be controlled.
6. (f) Automatic functions (d01wcc and d01xbc)
Both functions are for integrals of the form
 $\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n} f(x_1,x_2,\dots,x_n)\,dx_n\,dx_{n-1}\cdots dx_1.$
d01xbc is an adaptive Monte Carlo function. This function is usually slow and not recommended for high-accuracy work. It is a robust function that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.
d01wcc is an adaptive deterministic function. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between $2$ and $5$, using significantly fewer integrand evaluations than would be required by the Monte Carlo function d01xbc. The function will usually work when the integrand is mildly singular and for $n\le 10$ should be used before d01xbc. If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different functions.
There are many problems for which one or both of the functions will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this argument carefully, with reference to the time available and the accuracy desired.

## 4 Decision Trees

### Tree 1: One-dimensional integrals over a finite interval

Is the functional form of the integrand known?
- No: use d01gac.
- Yes: Do you require reverse communication?
  - Yes: use d01rac.
  - No: Are you concerned with efficiency for simple integrals?
    - No: use d01rac, d01rgc or d01rjc.
    - Yes: Is the integrand smooth (polynomial-like) apart from weight function ${|x-\left(a+b\right)/2|}^{c}$ or ${\left(b-x\right)}^{c}{\left(x-a\right)}^{d}$?
      - Yes: use d01uac, d01fbc or d01gdc.
      - No: Is the integrand reasonably smooth and the required accuracy not too great?
        - Yes: use d01bdc or d01uac, or possibly d01esc.
        - No: Are multiple integrands to be integrated simultaneously?
          - Yes: use d01rac or possibly d01esc.
          - No: Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points?
            - Yes: split the range and begin again; or use d01rgc or d01rlc.
            - No: Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function ${\left(b-x\right)}^{\alpha }{\left(x-a\right)}^{\beta }{\left(\mathrm{log}\left(b-x\right)\right)}^{k}{\left(\mathrm{log}\left(x-a\right)\right)}^{l}$?
              - Yes: use d01spc.
              - No: Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $\frac{1}{x-c}$?
                - Yes: use d01sqc.
                - No: Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$?
                  - Yes: use d01snc.
                  - No: Is the integrand free of singularities?
                    - Yes: use d01rjc, d01rkc or d01uac, or possibly d01esc.
                    - No: use d01rac, d01rgc or d01rjc.

### Tree 2: One-dimensional integrals over a semi-infinite or infinite interval

Is the functional form of the integrand known?
- No: use d01gac (integrates over the range of the points supplied).
- Yes: Are you concerned with efficiency for simple integrands?
  - No: use d01rmc.
  - Yes: Is the integrand smooth (polynomial-like) with no exceptions?
    - Yes: use d01uac, d01bdc, or d01esc with transformation. See Section 3.3.2(b)(ii).
    - No: Is the integrand of the form ?
      - Yes: use d01ubc.
      - No: Is the integrand smooth (polynomial-like) apart from weight function ${e}^{-\beta x}$ (semi-infinite range) or ${e}^{-\beta {\left(x-a\right)}^{2}}$ (infinite range), or is the integrand polynomial-like in $\frac{1}{x+b}$ (semi-infinite range)?
        - Yes: use d01uac or d01fbc.
        - No: Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit?
          - Yes: split the range; begin again using the finite or infinite range tree.
          - No: Does the integrand oscillate over the entire range?
            - No: use d01rmc.
            - Yes: Does the integrand decay rapidly towards an infinite limit?
              - Yes: use d01rmc; or set a cutoff and use the finite range tree.
              - No: Is the integrand free of violent oscillations apart from weight function $\mathrm{cos}\left(\omega x\right)$ or $\mathrm{sin}\left(\omega x\right)$ (semi-infinite range)?
                - Yes: use d01ssc.
                - No: use finite-range integration between the zeros and extrapolate.

### Tree 3: Multidimensional integrals

- Is the dimension $=2$ and the region a product region?
  - Yes: d01dac.
  - No: Is the dimension $\le 4$?
    - Yes: Is the region an $n$-sphere?
      - Yes: d01fbc with user transformation.
      - No: Is the region a simplex?
        - Yes: d01fbc with user transformation, or d01pac.
        - No: Is the integrand smooth (polynomial-like) in each dimension apart from a weight function?
          - Yes: d01fbc.
          - No: Is the integrand free of extremely bad behaviour?
            - Yes: d01esc, d01fdc, d01gdc or d01wcc.
            - No: Is the bad behaviour on the boundary?
              - Yes: d01fdc or d01wcc.
              - No: compare results from at least two of d01esc, d01fdc, d01gdc, d01wcc and d01xbc, and from one-dimensional recursive application.
    - No: Is the region an $n$-sphere?
      - Yes: d01fdc.
      - No: Is the region a simplex?
        - Yes: d01pac.
        - No: Is high accuracy required?
          - Yes: d01fdc with argument tuning.
          - No: Is the dimension high?
            - Yes: d01esc, d01fdc, d01gdc or d01xbc.
            - No: d01wcc.
Note: d01fbc may require the use of d01tbc, d01tcc or d01tdc to calculate the weights and abscissae for each dimension (d01tdc may require use of d01tec).

## 5 Functionality Index

Korobov optimal coefficients for use in d01gdc:
   when number of points is a product of $2$ primes d01gzc
   when number of points is prime d01gyc
Multidimensional quadrature,
   over a finite two-dimensional region d01dac
   over a general product region,
      Korobov–Conroy number-theoretic method d01gdc
      Sag–Szekeres method (also over $n$-sphere) d01fdc
   over a hyper-rectangle,
      Monte Carlo method d01xbc
      sparse grid method (with user transformation),
         multiple integrands, vectorized interface d01esc
   over an $n$-simplex d01pac
One-dimensional quadrature,
   adaptive integration of a function over a finite interval,
      strategy due to Gonnet,
         vectorized interface d01rgc
      strategy due to Piessens and de Doncker,
         allowing for singularities at user-specified break-points d01rlc
         suitable for badly behaved integrands d01rjc
         suitable for highly oscillatory integrals d01rkc
         weight function $1/(x-c)$, Cauchy principal value (Hilbert transform) d01sqc
         weight function $\cos(\omega x)$ or $\sin(\omega x)$ d01snc
         weight function with end-point singularities of algebraico-logarithmic type d01spc
   adaptive integration of a function over an infinite or semi-infinite interval,
      strategy due to Piessens and de Doncker d01rmc
      weight function $\cos(\omega x)$ or $\sin(\omega x)$ d01ssc
   integration of a function defined by data values only,
      Gill–Miller method d01gac
   non-adaptive integration over a finite, semi-infinite or infinite interval,
      using pre-computed weights and abscissae,
         single abscissae interface d01tac
         specific integral with weight $\exp(-x^{2})$ over a semi-infinite interval d01ubc
         vectorized interface d01uac
   non-adaptive integration over a finite interval d01bdc
   reverse communication,
      adaptive integration over a finite interval,
         multiple integrands,
            efficient on vector machines d01rac
Service functions,
   array size query for d01rac d01rcc
   general option getting d01zlc
   general option setting and initialization d01zkc
Weights and abscissae for Gaussian quadrature rules,
   method of Golub and Welsch,
      calculating the weights and abscissae d01tdc
      generating recurrence coefficients d01tec
   more general choice of rule,
      calculating the weights and abscissae d01tcc
   restricted choice of rule,
      using pre-computed weights and abscissae d01tbc

## 6 Auxiliary Functions Associated with Library Function Arguments

None.

## 7 Withdrawn or Deprecated Functions

The following table lists all functions that have been withdrawn since Mark 24 of the Library, or that remain in the Library but are deprecated.
| Function | Status | Replacement Function(s) |
|----------|--------|-------------------------|
| d01ajc | Withdrawn at Mark 24 | d01rjc |
| d01akc | Withdrawn at Mark 24 | d01rkc |
| d01alc | Withdrawn at Mark 24 | d01rlc |
| d01amc | Withdrawn at Mark 24 | d01rmc |
| d01anc | Withdrawn at Mark 24 | d01snc |
| d01apc | Withdrawn at Mark 24 | d01spc |
| d01aqc | Withdrawn at Mark 24 | d01sqc |
| d01asc | Withdrawn at Mark 24 | d01ssc |
| d01bac | Withdrawn at Mark 24 | d01uac |
| d01fcc | Withdrawn at Mark 25 | d01wcc |
| d01gbc | Withdrawn at Mark 25 | d01xbc |
| d01sjc | To be withdrawn at Mark 31.3 | d01rjc |
| d01skc | To be withdrawn at Mark 31.3 | d01rkc |
| d01slc | To be withdrawn at Mark 31.3 | d01rlc |
| d01smc | To be withdrawn at Mark 31.3 | d01rmc |
| d01tac | Withdrawn at Mark 28.3 | d01uac |

## 8 References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26
Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87
Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Sobol I M (1974) The Monte Carlo Method The University of Chicago Press
Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall
Wynn P (1956) On a device for computing the ${e}_{m}\left({S}_{n}\right)$ transformation Math. Tables Aids Comput. 10 91–96