# NAG Library Chapter Introduction

## 1 Scope of the Chapter

This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.

## 2 Background to the Problems

The majority of the functions in this chapter approximate real-valued functions of a single real argument, and the techniques involved are described in Section 2.1. In addition the chapter contains functions for elliptic integrals (see Section 2.2), Bessel and Airy functions of a complex argument (see Section 2.3), complementary error function of a complex argument, hypergeometric functions and various option pricing functions for use in financial applications.

### 2.1 Functions of a Single Real Argument

Most of the functions provided for functions of a single real argument have been based on truncated Chebyshev expansions. This method of approximation was adopted as a compromise between the conflicting requirements of efficiency and ease of implementation on many different machine ranges. For details of the reasons behind this choice and the production and testing procedures followed in constructing this chapter see Schonfelder (1976).
Basically, if the function to be approximated is $f\left(x\right)$, then for $x\in \left[a,b\right]$ an approximation of the form
 $f(x) = g(x) {\sum_{r=0}^{\infty}}' C_r T_r(t)$
is used (${\sum }^{\prime }$ denotes, according to the usual convention, a summation in which the first term is halved), where $g\left(x\right)$ is some suitable auxiliary function which extracts any singularities, asymptotes and, if possible, zeros of the function in the range in question and $t=t\left(x\right)$ is a mapping of the general range $\left[a,b\right]$ to the specific range [$-1,+1$] required by the Chebyshev polynomials, ${T}_{r}\left(t\right)$. For a detailed description of the properties of the Chebyshev polynomials see Clenshaw (1962) and Fox and Parker (1968).
The essential property of these polynomials for the purposes of function approximation is that ${T}_{n}\left(t\right)$ oscillates between $\pm 1$ and takes its extreme values $n+1$ times in the interval $[-1,+1]$. Therefore, provided the coefficients ${C}_{r}$ decrease in magnitude sufficiently rapidly, the error made by truncating the Chebyshev expansion after $n$ terms is approximately given by
 $E(t) \simeq C_n T_n(t).$
That is, the error oscillates between $\pm {C}_{n}$ and takes its extreme value $n+1$ times in the interval in question. This is precisely the condition for the approximation to be a minimax representation, one which minimizes the maximum error. By suitable choice of the interval $[a,b]$, the auxiliary function $g\left(x\right)$ and the mapping of the independent variable $t\left(x\right)$, it is almost always possible to obtain a Chebyshev expansion with rapid convergence, and hence truncations that provide near-minimax polynomial approximations to the required function. The difference between the true minimax polynomial and the truncated Chebyshev expansion is seldom great enough to be of significance.
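As an illustration of this rapid coefficient decay, the following sketch (plain Python, not Library code; the quadrature size and the choice of $e^x$ as test function are ours) computes approximate Chebyshev coefficients via the discrete orthogonality of the $T_r$:

```python
import math

def cheb_coeffs(f, n, quad_points=64):
    """Approximate the Chebyshev coefficients C_r of f on [-1, 1] using
    Gauss-Chebyshev quadrature (discrete orthogonality of the T_r)."""
    thetas = [(k + 0.5) * math.pi / quad_points for k in range(quad_points)]
    fv = [f(math.cos(t)) for t in thetas]
    return [(2.0 / quad_points) * sum(fv[k] * math.cos(r * thetas[k])
                                      for k in range(quad_points))
            for r in range(n)]

# Coefficients of exp(x) on [-1, 1]: they decay roughly factorially,
# so a short truncation is already a near-minimax polynomial.
c = cheb_coeffs(math.exp, 6)
```

For $e^x$ the coefficients fall off quickly ($C_0 \approx 2.532$, $C_1 \approx 1.130$, $C_4 \approx 0.005$), so truncating after a handful of terms already yields near machine-precision accuracy for this function.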
The evaluation of the Chebyshev expansions follows one of two methods. The first and most efficient, and hence the most commonly used, works with the equivalent simple polynomial. The second method, which is used on the few occasions when the first method proves to be unstable, is based directly on the truncated Chebyshev series, and uses backward recursion to evaluate the sum. For the first method, a suitably truncated Chebyshev expansion (truncation is chosen so that the error is less than the machine precision) is converted to the equivalent simple polynomial. That is, we evaluate the set of coefficients ${b}_{r}$ such that
 $y(t) = \sum_{r=0}^{n-1} b_r t^r = {\sum_{r=0}^{n-1}}' C_r T_r(t).$
The polynomial can then be evaluated by the efficient Horner's method of nested multiplications,
 $y(t) = b_0 + t(b_1 + t(b_2 + \dots + t(b_{n-2} + t b_{n-1}) \dots )).$
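The nested multiplication can be sketched in a few lines (illustrative Python; the Library implements this in C):

```python
def horner(b, t):
    """Evaluate y(t) = b[0] + b[1]*t + ... + b[n-1]*t**(n-1)
    by Horner's nested multiplication."""
    y = 0.0
    for coeff in reversed(b):
        y = y * t + coeff
    return y
```

Each loop iteration costs one multiplication and one addition, which is why this is the preferred evaluation scheme when it is numerically safe.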
This method of evaluation results in efficient functions but for some expansions there is considerable loss of accuracy due to cancellation effects. In these cases the second method is used. It is well known that if
 $b_{n-1} = C_{n-1}, \quad b_{n-2} = 2t b_{n-1} + C_{n-2}, \quad b_j = 2t b_{j+1} - b_{j+2} + C_j, \quad j = n-3, n-4, \dots, 0$
then
 ${\sum_{r=0}^{n-1}}' C_r T_r(t) = \tfrac{1}{2}(b_0 - b_2)$
and this is always stable. This method is most efficiently implemented by using three variables cyclically and explicitly constructing the recursion.
That is,
 $$\begin{aligned} \alpha &= C_{n-1} \\ \beta &= 2t\alpha + C_{n-2} \\ \gamma &= 2t\beta - \alpha + C_{n-3} \\ \alpha &= 2t\gamma - \beta + C_{n-4} \\ \beta &= 2t\alpha - \gamma + C_{n-5} \\ &\ \ \vdots \\ \text{say}\quad \alpha &= 2t\gamma - \beta + C_2 \\ \beta &= 2t\alpha - \gamma + C_1 \\ y(t) &= t\beta - \alpha + \tfrac{1}{2}C_0 \end{aligned}$$
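The backward recurrence can be sketched as follows (illustrative Python; the cyclic use of three variables is expressed here with simultaneous assignment):

```python
def clenshaw(c, t):
    """Evaluate sum' C_r T_r(t) for r = 0..n-1 (first term halved) by the
    backward recurrence b_j = 2*t*b_{j+1} - b_{j+2} + C_j, j = n-1, ..., 1,
    finishing with y(t) = t*b_1 - b_2 + C_0/2."""
    b1, b2 = 0.0, 0.0              # b_{j+1} and b_{j+2}
    for cj in reversed(c[1:]):     # j = n-1, n-2, ..., 1
        b1, b2 = 2.0 * t * b1 - b2 + cj, b1
    return t * b1 - b2 + 0.5 * c[0]
```

As a check, feeding in the coefficient vector of a single Chebyshev polynomial reproduces that polynomial: with $C_2 = 1$ and all other coefficients zero, the result is $T_2(t) = 2t^2 - 1$.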
The auxiliary functions used are normally functions compounded of simple polynomial (usually linear) factors extracting zeros, and the primary compiler-provided functions, sin, cos, ln, exp, sqrt, which extract singularities and/or asymptotes or in some cases basic oscillatory behaviour, leaving a smooth well-behaved function to be approximated by the Chebyshev expansion which can therefore be rapidly convergent.
The mappings of [$a,b$] to [$-1,+1$] used range from simple linear mappings to the case when $b$ is infinite, and considerable improvement in convergence can be obtained by use of a bilinear form of mapping. Another common form of mapping is used when the function is even; that is, it involves only even powers in its expansion. In this case an approximation over the whole interval [$-a,a$] can be provided using a mapping $t=2{\left(x/a\right)}^{2}-1$. This embodies the evenness property but the expansion in $t$ involves all powers and hence removes the necessity of working with an expansion with half its coefficients zero.
For many of the functions an analysis of the error in principle is given, namely, if $E$ and $\nabla$ are the absolute errors in function and argument and $\epsilon$ and $\delta$ are the corresponding relative errors, then
 $E \simeq f'(x)\nabla, \qquad E \simeq x f'(x)\,\delta, \qquad \epsilon \simeq \left|\frac{x f'(x)}{f(x)}\right|\delta.$
If we ignore errors that arise in the argument of the function by propagation of data errors, etc., and consider only those errors that result from the fact that a real number is being represented in the computer in floating-point form with finite precision, then $\delta$ is bounded and this bound is independent of the magnitude of $x$. For example, on an $11$-digit machine
 $\delta \le 10^{-11}.$
(This of course implies that the absolute error $\nabla = x\delta$ is also bounded, but the bound is now dependent on $x$.) Because of this, the last two relations above are probably of more interest. Where possible, the relative error propagation is discussed; that is, the behaviour of the error amplification factor $\left|x{f}^{\prime }\left(x\right)/f\left(x\right)\right|$ is described. In some cases, such as near zeros of the function which cannot be extracted explicitly, the absolute error in the result is the quantity of significance, and here the factor $\left|x{f}^{\prime }\left(x\right)\right|$ is described instead. In general, testing has shown that the error behaviour of the functions follows these theoretical predictions fairly well. In regions where the error amplification factors are less than or of the order of one, the errors may be slightly larger than these predictions; there the errors are limited largely by the finite precision of arithmetic in the machine, but $\epsilon$ is normally no more than a few times greater than the bound on $\delta$. In regions where the amplification factors are large, of order ten or greater, the theoretical analysis gives a good measure of the accuracy obtainable.
It should be noted that the definitions and notations used for the functions in this chapter are all taken from Abramowitz and Stegun (1972). You are strongly recommended to consult this book for details before using the functions in this chapter. An excellent on-line reference for special functions is the NIST Digital Library of Mathematical Functions.

### 2.2 Approximations to Elliptic Integrals

Four functions provided here are symmetrised variants of the classical (Legendre) elliptic integrals. These alternative definitions were suggested by Carlson (1965), Carlson (1977b) and Carlson (1977a), who also developed the basic algorithms used in this chapter.
The symmetrised elliptic integral of the first kind is represented by
 $R_F(x, y, z) = \frac{1}{2} \int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)}},$
where $x,y,z\ge 0$ and at most one may be equal to zero.
The normalization factor, $\frac{1}{2}$, is chosen so as to make
 $R_F(x, x, x) = 1/\sqrt{x}.$
If any two of the variables are equal, ${R}_{F}$ degenerates into the second function
 $R_C(x, y) = R_F(x, y, y) = \frac{1}{2} \int_0^\infty \frac{dt}{(t+y)\sqrt{t+x}},$
where the argument restrictions are now $x\ge 0$ and $y\ne 0$.
This function is related to the logarithm or inverse hyperbolic functions if $0 < y < x$, and to the inverse circular functions if $0\le x\le y$.
The symmetrised elliptic integral of the second kind is defined by
 $R_D(x, y, z) = \frac{3}{2} \int_0^\infty \frac{dt}{\sqrt{(t+x)(t+y)(t+z)^3}}$
with $z>0$, $x\ge 0$ and $y\ge 0$, but only one of $x$ or $y$ may be zero.
The function is a degenerate special case of the symmetrised elliptic integral of the third kind
 $R_J(x, y, z, \rho) = \frac{3}{2} \int_0^\infty \frac{dt}{(t+\rho)\sqrt{(t+x)(t+y)(t+z)}}$
with $\rho \ne 0$ and $x,y,z\ge 0$ with at most one equality holding. Thus ${R}_{D}\left(x,y,z\right)={R}_{J}\left(x,y,z,z\right)$. The normalization of both these functions is chosen so that
 $R_D(x, x, x) = R_J(x, x, x, x) = 1/(x\sqrt{x}).$
The algorithms used for all these functions are based on duplication theorems. These allow a recursion system to be established which constructs a new set of arguments from the old using a combination of arithmetic and geometric means. The value of the function at the original arguments can then be simply related to the value at the new arguments. These recursive reductions are used until the arguments differ from the mean by an amount small enough for a Taylor series about the mean to give sufficient accuracy when retaining terms of order less than six. Each step of the recurrences reduces the difference from the mean by a factor of four, and as the truncation error is of order six, the truncation error goes like ${\left(4096\right)}^{-n}$, where $n$ is the number of iterations.
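The duplication scheme for $R_F$, for example, can be sketched as follows (illustrative Python following Carlson's published method; the convergence tolerance and the fifth-order Taylor constants are the standard ones, but this is not the Library code):

```python
import math

def rf(x, y, z, tol=1e-10):
    """Carlson's R_F(x, y, z) by the duplication theorem: replace (x, y, z)
    by ((x+L)/4, (y+L)/4, (z+L)/4) with L = sqrt(x*y) + sqrt(y*z) + sqrt(z*x)
    until the arguments are close to their mean, then use a Taylor expansion
    about the mean retaining terms of order less than six."""
    while True:
        mu = (x + y + z) / 3.0
        dx, dy, dz = 1.0 - x / mu, 1.0 - y / mu, 1.0 - z / mu
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            break
        lam = math.sqrt(x * y) + math.sqrt(y * z) + math.sqrt(z * x)
        x, y, z = (x + lam) / 4.0, (y + lam) / 4.0, (z + lam) / 4.0
    e2 = dx * dy + dy * dz + dz * dx   # elementary symmetric functions
    e3 = dx * dy * dz
    return (1.0 - e2 / 10.0 + e3 / 14.0 + e2 * e2 / 24.0
            - 3.0 * e2 * e3 / 44.0) / math.sqrt(mu)
```

A quick sanity check uses the normalization $R_F(x,x,x) = 1/\sqrt{x}$ and the complete integral $K(0) = R_F(0,1,1) = \pi/2$.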
The above forms can be related to the more traditional canonical forms (see Section 17.2 of Abramowitz and Stegun (1972)), as follows.
If we write $q={\mathrm{cos}}^{2}\varphi ,r=1-m {\mathrm{sin}}^{2}\varphi ,s=1-n {\mathrm{sin}}^{2}\varphi$, where $0\le \varphi \le \frac{1}{2}\pi$, we have
the classical elliptic integral of the first kind:
 $F(\phi \mid m) = \int_0^\phi (1 - m \sin^2\theta)^{-\frac{1}{2}} \, d\theta = \sin\phi \; R_F(q, r, 1);$
the classical elliptic integral of the second kind:
 $E(\phi \mid m) = \int_0^\phi (1 - m \sin^2\theta)^{\frac{1}{2}} \, d\theta = \sin\phi \; R_F(q, r, 1) - \tfrac{1}{3} m \sin^3\phi \; R_D(q, r, 1);$
the classical elliptic integral of the third kind:
 $\Pi(n; \phi \mid m) = \int_0^\phi (1 - n \sin^2\theta)^{-1} (1 - m \sin^2\theta)^{-\frac{1}{2}} \, d\theta = \sin\phi \; R_F(q, r, 1) + \tfrac{1}{3} n \sin^3\phi \; R_J(q, r, 1, s).$
Also the classical complete elliptic integral of the first kind:
 $K(m) = \int_0^{\pi/2} (1 - m \sin^2\theta)^{-\frac{1}{2}} \, d\theta = R_F(0, 1-m, 1);$
the classical complete elliptic integral of the second kind:
 $E(m) = \int_0^{\pi/2} (1 - m \sin^2\theta)^{\frac{1}{2}} \, d\theta = R_F(0, 1-m, 1) - \tfrac{1}{3} m R_D(0, 1-m, 1).$
For convenience, Chapter s contains functions to evaluate classical and symmetrised elliptic integrals.

### 2.3 Bessel and Airy Functions of a Complex Argument

The functions for Bessel and Airy functions of a real argument are based on Chebyshev expansions, as described in Section 2.1. The functions provided for functions of a complex argument, however, use different methods. These functions relate all functions to the modified Bessel functions ${I}_{\nu }\left(z\right)$ and ${K}_{\nu }\left(z\right)$ computed in the right-half complex plane, including their analytic continuations. ${I}_{\nu }$ and ${K}_{\nu }$ are computed by different methods according to the values of $z$ and $\nu$. The methods include power series, asymptotic expansions and Wronskian evaluations. The relations between functions are based on well known formulae (see Abramowitz and Stegun (1972)).

### 2.4 Option Pricing Functions

The option pricing functions evaluate the closed form solutions or approximations to the equations that define mathematical models for the prices of selected financial option contracts. These solutions can be viewed as special functions determined by the underlying equations. The terminology associated with these functions arises from their setting in financial markets and is briefly outlined below. See Joshi (2003) for a comprehensive introduction to this subject. An option is a contract which gives the holder the right, but not the obligation, to buy (if it is a call) or sell (if it is a put) a particular asset, $S$. A European option can be exercised only at the specified expiry time, $T$, while an American option can be exercised at any time up to $T$. For Asian options the average underlying price over a pre-set time period determines the payoff.
The asset is bought (if a call) or sold (if a put) at a pre-specified strike price $X$. Thus, an option contract has a payoff to the holder of $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{\left({S}_{T}-X\right),0\right\}$ for a call or $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{\left(X-{S}_{T}\right),0\right\}$, for a put, which depends on whether the asset price at the time of exercise is above (call) or below (put) the strike, $X$. If at any moment in time a contract is currently showing a theoretical profit then it is deemed ‘in-the-money’; otherwise it is deemed ‘out-of-the-money’.
The option contract itself therefore has a value and, in many cases, can be traded in markets. Mathematical models (e.g., Black–Scholes, Merton, Vasicek, Hull–White, Heston, CEV, SABR, …) give theoretical prices for particular option contracts using a number of assumptions about the behaviour of financial markets. Typically the price ${S}_{t}$ of the underlying asset at time $t$ is modelled as the solution of a stochastic differential equation (SDE). Depending on the complexity of this equation, the model may admit closed form formulae for the prices of various options. The options described in this chapter introduction are detailed below. We let $𝔼$ denote expectation with respect to the risk neutral measure and we define ${𝕀}_{A}$ to be $1$ on the set $A$ and $0$ otherwise.
The price of a standard European call option is $𝔼\left({e}^{-rT}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{S}_{T}-X,0\right\}\right)$ and the price of a standard European put option is $𝔼\left({e}^{-rT}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{X-{S}_{T},0\right\}\right)$.
For continuously averaged geometric Asian options define
 $G(T) = \exp\left(\frac{1}{T} \int_0^T \log(S_t) \, dt\right).$
Then the price of an Asian call option is $𝔼\left({e}^{-rT}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{G\left(T\right)-X,0\right\}\right)$ and the price of an Asian put option is $𝔼\left({e}^{-rT}\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{X-G\left(T\right),0\right\}\right)$.
For a binary asset-or-nothing option the price of a call is $𝔼\left({e}^{-rT}{S}_{T}{𝕀}_{\left\{{S}_{T}>X\right\}}\right)$ and the price of a put is $𝔼\left({e}^{-rT}{S}_{T}{𝕀}_{\left\{{S}_{T}<X\right\}}\right)$.
For a binary cash-or-nothing option the price of a call is $𝔼\left({e}^{-rT}X{𝕀}_{\left\{{S}_{T}>X\right\}}\right)$ and the price of a put is $𝔼\left({e}^{-rT}X{𝕀}_{\left\{{S}_{T}<X\right\}}\right)$.
For a floating-strike lookback option the price of a call is $𝔼\left({e}^{-rT}\left({S}_{T}-{\mathrm{min}}_{0\le t\le T}{S}_{t}\right)\right)$ and the price of a put is $𝔼\left({e}^{-rT}\left({\mathrm{max}}_{0\le t\le T}{S}_{t}-{S}_{T}\right)\right)$.
For an up-and-in barrier option with barrier level $H$ and cash rebate $K$, set $A=\left\{{\mathrm{max}}_{0\le t\le T}{S}_{t}>H\right\}$. Then the price of a call is
 $\mathbb{E}\left(e^{-rT}\max\{S_T - X, 0\}\,\mathbb{I}_A + e^{-rT}K(1 - \mathbb{I}_A)\right)$
and the price of a put is
 $\mathbb{E}\left(e^{-rT}\max\{X - S_T, 0\}\,\mathbb{I}_A + e^{-rT}K(1 - \mathbb{I}_A)\right).$
For a down-and-in barrier option with barrier level $H$ and cash rebate $K$, set $A=\left\{{\mathrm{min}}_{0\le t\le T}{S}_{t}<H\right\}$. Then the price of a call is
 $\mathbb{E}\left(e^{-rT}\max\{S_T - X, 0\}\,\mathbb{I}_A + e^{-rT}K(1 - \mathbb{I}_A)\right)$
and the price of a put is
 $\mathbb{E}\left(e^{-rT}\max\{X - S_T, 0\}\,\mathbb{I}_A + e^{-rT}K(1 - \mathbb{I}_A)\right).$
For an up-and-out barrier option with barrier level $H$ and cash rebate $K$, set $A=\left\{{\mathrm{max}}_{0\le t\le T}{S}_{t}>H\right\}$. Then the price of a call is
 $\mathbb{E}\left(e^{-rT}\max\{S_T - X, 0\}(1 - \mathbb{I}_A) + e^{-rT}K\,\mathbb{I}_A\right)$
and the price of a put is
 $\mathbb{E}\left(e^{-rT}\max\{X - S_T, 0\}(1 - \mathbb{I}_A) + e^{-rT}K\,\mathbb{I}_A\right).$
For a down-and-out barrier option with barrier level $H$ and cash rebate $K$, set $A=\left\{{\mathrm{min}}_{0\le t\le T}{S}_{t}<H\right\}$. Then the price of a call is
 $\mathbb{E}\left(e^{-rT}\max\{S_T - X, 0\}(1 - \mathbb{I}_A) + e^{-rT}K\,\mathbb{I}_A\right)$
and the price of a put is
 $\mathbb{E}\left(e^{-rT}\max\{X - S_T, 0\}(1 - \mathbb{I}_A) + e^{-rT}K\,\mathbb{I}_A\right).$
The price of an American call option is $\mathrm{ess}{\mathrm{sup}}_{0\le \tau \le T}𝔼\left({e}^{-r\tau }\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{S}_{\tau }-X,0\right\}\right)$ and the price of an American put option is $\mathrm{ess}{\mathrm{sup}}_{0\le \tau \le T}𝔼\left({e}^{-r\tau }\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{X-{S}_{\tau },0\right\}\right)$. Here $\mathrm{ess}{\mathrm{sup}}_{0\le \tau \le T}$ denotes the essential supremum over all stopping times $\tau$ for the process $S$ which take values in $\left[0,T\right]$. If $S$ is a Markov process, then the essential supremum may be replaced with the normal supremum. Note that if the asset $S$ pays no dividends then the price of an American call option is the same as a European call option.

#### 2.4.1 The Black–Scholes Model

The best known model of asset behaviour is the Black–Scholes model. Under the risk-neutral measure, the asset is governed by the SDE
 $\frac{dS_t}{S_t} = (r - q) \, dt + \sigma \, dW_t$
where $r$ is the continuously compounded risk-free interest rate, $q$ is the continuously compounded dividend yield, $\sigma$ is the volatility of log-asset returns (i.e., $\mathrm{log}\left({S}_{t+dt}/{S}_{t}\right)$) and $W={\left({W}_{t}\right)}_{t\ge 0}$ is a standard Brownian motion. Under this model, the price of any option $P$ must solve the Black–Scholes PDE
 $\frac{\partial P}{\partial t} + (r - q) S \frac{\partial P}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 P}{\partial S^2} - rP = 0$
at all times before the option is exercised. This PDE admits a closed form solution for a number of different options.
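For instance, the familiar closed-form solution for a European call or put with continuous dividend yield can be sketched as follows (a textbook illustration, not the implementation behind nag_bsm_price (s30aac)):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(s, x, t, r, q, sigma, call=True):
    """Black-Scholes price of a European call or put: one closed-form
    solution of the PDE above, for spot s, strike x, expiry t,
    rate r, dividend yield q and volatility sigma."""
    d1 = (math.log(s / x) + (r - q + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    if call:
        return s * math.exp(-q * t) * norm_cdf(d1) - x * math.exp(-r * t) * norm_cdf(d2)
    return x * math.exp(-r * t) * norm_cdf(-d2) - s * math.exp(-q * t) * norm_cdf(-d1)
```

Put-call parity, $c - p = S e^{-qT} - X e^{-rT}$, provides a quick sanity check on any such implementation.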

#### 2.4.2 The Black–Scholes Model with Term Structure

The simplest extension of the Black–Scholes model is to allow $r$, $q$ and $\sigma$ to be deterministic functions of time so that
 $\frac{dS_t}{S_t} = (r_t - q_t) \, dt + \sigma_t \, dW_t.$
In this case one can still obtain closed form solutions for some options, e.g., European calls and puts.

#### 2.4.3 The Heston Model

Heston (1993) proposed a stochastic volatility model with the following form
 $\frac{dS_t}{S_t} = (r - q) \, dt + \sqrt{v_t} \, dW_t^{(1)}, \qquad dv_t = \kappa(\eta - v_t) \, dt + \sigma \sqrt{v_t} \, dW_t^{(2)}$
where ${W}^{\left(1\right)}$ and ${W}^{\left(2\right)}$ are two Brownian motions with quadratic covariation given by $d{〈{W}^{\left(1\right)},{W}^{\left(2\right)}〉}_{t}=\rho dt$. In this model $r$ and $q$ are the continuously compounded risk free interest rate and dividend rate respectively, $v={\left({v}_{t}\right)}_{t\ge 0}$ is the stochastic volatility process, $\eta$ is the long term mean of volatility, $\kappa$ is the rate of mean reversion, and $\sigma$ is the volatility of volatility. The prices of European call and put options in the Heston model are available in closed form up to the evaluation of an integral transform (see Lewis (2000)).

#### 2.4.4 The Heston Model with Term Structure

The Heston model can be extended by allowing the coefficients to become deterministic functions of time:
 $\frac{dS_t}{S_t} = (r_t - q_t) \, dt + \sqrt{v_t} \, dW_t^{(1)}, \qquad dv_t = \kappa_t(\eta_t - v_t) \, dt + \sigma_t \sqrt{v_t} \, dW_t^{(2)}$
where ${W}^{\left(1\right)}$ and ${W}^{\left(2\right)}$ are two Brownian motions with quadratic covariation given by $d{〈{W}^{\left(1\right)},{W}^{\left(2\right)}〉}_{t}={\rho }_{t}dt$. When the coefficients are restricted to being piecewise constant functions of time, the prices of European call and put options can be calculated as described in Elices (2008) and Mikhailov and Nögel (2003).

### 2.5 Hypergeometric Functions

The confluent hypergeometric function $M\left(a,b,x\right)$ (or ${}_{1}F_{1}\left(a;b;x\right)$) requires a number of techniques to approximate it over the whole parameter $\left(a,b\right)$ space and for all argument $\left(x\right)$ values. For $x$ well within the unit circle $\left|x\right|\le \rho <1$ (where $\rho =0.8$ say), and for relatively small parameter values, the function can be well approximated by Taylor expansions, continued fractions or through the solution of the related ordinary differential equation by an explicit, adaptive integrator. For values of $\left|x\right|>\rho$, one of several transformations can be performed (depending on the value of $x$) to reformulate the problem in terms of a new argument ${x}^{\prime }$ such that $\left|{x}^{\prime }\right|\le \rho$. If one or more of the parameters is relatively large (e.g., $\left|a\right|>30$) then recurrence relations can be used in combination to reformulate the problem in terms of parameter values of small size (e.g., $\left|a\right|<1$).
Approximations to the hypergeometric functions can therefore require all of the above techniques in sequence: a transformation to get an argument well inside the unit circle, a combination of recurrence relations to reduce the parameter sizes, and the approximation of the resulting hypergeometric function by one of a set of approximation techniques. Similar complications arise in the computation of the Gaussian Hypergeometric Function ${}_{2}F_{1}$.
All the techniques described above are based on those described in Pearson (2009).

## 3 Recommendations on Choice and Use of Available Functions

### 3.1 Vectorized Function Variants

Many functions in Chapter s which compute functions of a single real argument have variants which operate on vectors of arguments. For example, nag_bessel_i0 (s18aec) computes the value of the ${I}_{0}$ Bessel function for a single argument, and nag_bessel_i0_vector (s18asc) computes the same function for multiple arguments. In general it should be more efficient to use vectorized functions where possible, though to some extent this will depend on the environment from which you call the functions. See Section 4 for a complete list of vectorized functions.

### 3.2 Elliptic Integrals

IMPORTANT ADVICE: users who encounter elliptic integrals in the course of their work are strongly recommended to look at transforming their analysis directly to one of the Carlson forms, rather than to the traditional canonical Legendre forms. In general, the extra symmetry of the Carlson forms is likely to simplify the analysis, and these symmetric forms are much more stable to calculate. Note, however, that this transformation may eventually lead to the following combination of Carlson forms:
 $R_F(0, 1-m, 1) - \tfrac{1}{3} m R_D(0, 1-m, 1)$
with possibly $m\to 1$, which makes ${R}_{F}$ and ${R}_{D}$ undefined, although the combination itself remains defined and tends to $1$. The function nag_elliptic_integral_complete_E (s21bjc), which returns the Legendre form $E\left(m\right)$ through this combination, makes provision for such a case and allows $m=1$.
The function nag_elliptic_integral_rc (s21bac) for ${R}_{C}$ is largely included as an auxiliary to the other functions for elliptic integrals. This integral essentially calculates elementary functions, e.g.,
 $\ln x = (x - 1) \, R_C\!\left(\left(\tfrac{1+x}{2}\right)^2, x\right), \quad x > 0; \qquad \arcsin x = x \, R_C(1 - x^2, 1), \quad |x| \le 1; \qquad \operatorname{arcsinh} x = x \, R_C(1 + x^2, 1); \quad \text{etc.}$
In general this method of calculating these elementary functions is not recommended as there are usually much more efficient specific functions available in the Library. However, nag_elliptic_integral_rc (s21bac) may be used, for example, to compute $\mathrm{ln}x/\left(x-1\right)$ when $x$ is close to $1$, without the loss of significant figures that occurs when $\mathrm{ln}x$ and $x-1$ are computed separately.
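A sketch of this idea (illustrative Python; it implements $R_C$ for $x \ge 0$, $y > 0$ by the same duplication technique with the standard fifth-order series constants, and is not the code behind s21bac):

```python
import math

def rc(x, y, tol=1e-10):
    """Carlson's degenerate integral R_C(x, y), x >= 0, y > 0, computed by
    duplication followed by a Taylor series about the mean."""
    while True:
        mu = (x + 2.0 * y) / 3.0
        s = (y - x) / (3.0 * mu)
        if abs(s) < tol:
            break
        lam = 2.0 * math.sqrt(x) * math.sqrt(y) + y
        x, y = (x + lam) / 4.0, (y + lam) / 4.0
    return (1.0 + s * s * (0.3 + s * (1.0 / 7.0
            + s * (0.375 + s * 9.0 / 22.0)))) / math.sqrt(mu)

def log_over_xm1(x):
    """ln(x)/(x-1) without cancellation near x = 1,
    using ln(x) = (x-1) * R_C(((1+x)/2)**2, x)."""
    return rc(((1.0 + x) / 2.0) ** 2, x)
```

At $x = 1$ the ratio is returned exactly as $1$, where the naive quotient would be $0/0$.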

### 3.3 Bessel and Airy Functions

For computing the Bessel functions ${J}_{\nu }\left(x\right)$, ${Y}_{\nu }\left(x\right)$, ${I}_{\nu }\left(x\right)$ and ${K}_{\nu }\left(x\right)$ where $x$ is real and $\nu =0\text{​ or ​}1$, special functions are provided, which are much faster than the more general functions that allow a complex argument and arbitrary real $\nu \ge 0$. Similarly, special functions are provided for computing the Airy functions and their derivatives $\mathrm{Ai}\left(x\right)$, $\mathrm{Bi}\left(x\right)$, ${\mathrm{Ai}}^{\prime }\left(x\right)$, ${\mathrm{Bi}}^{\prime }\left(x\right)$ for a real argument which are much faster than the functions for complex arguments.

### 3.4 Option Pricing Functions

For the Black–Scholes model, functions are provided to compute prices and derivatives (Greeks) of all the European options listed in Section 2.4. Prices for American call and put options can be obtained by calling nag_amer_bs_price (s30qcc) which uses the Bjerksund and Stensland (2002) approximation to the theoretical value. For the Black–Scholes model with term structure, prices for European call and put options can be obtained by calling nag_pde_bs_1d_analytic (d03ndc). The prices of European call and put options in the standard Heston model can be obtained by calling nag_heston_price (s30nac), while nag_heston_term (s30ncc) returns the same prices in the Heston model with term structure.

### 3.5 Hypergeometric Functions

Two functions are provided for the confluent hypergeometric function ${}_{1}F_{1}$. Both return values for ${}_{1}F_{1}\left(a;b;x\right)$ where parameters $a$ and $b$, and argument $x$, are all real, but one variant works in a scaled form designed to avoid unnecessary loss of precision. The unscaled function nag_specfun_1f1_real (s22bac) is easier to use and should be chosen in the first instance, changing to the scaled function nag_specfun_1f1_real_scaled (s22bbc) only if problems are encountered. Similar considerations apply to the functions for the Gaussian hypergeometric function, nag_specfun_2f1_real (s22bec) and nag_specfun_2f1_real_scaled (s22bfc).

## 4 Functionality Index

 Airy function,
 Ai, real argument,
 scalar nag_airy_ai (s17agc)
 vectorized nag_airy_ai_vector (s17auc)
 Ai or Ai′, complex argument, optionally scaled nag_complex_airy_ai (s17dgc)
 Ai′, real argument,
 scalar nag_airy_ai_deriv (s17ajc)
 vectorized nag_airy_ai_deriv_vector (s17awc)
 Bi, real argument,
 scalar nag_airy_bi (s17ahc)
 vectorized nag_airy_bi_vector (s17avc)
 Bi or Bi′, complex argument, optionally scaled nag_complex_airy_bi (s17dhc)
 Bi′, real argument,
 scalar nag_airy_bi_deriv (s17akc)
 vectorized nag_airy_bi_deriv_vector (s17axc)
 Arccosh,
 inverse hyperbolic cosine nag_arccosh (s11acc)
 Arcsinh,
 inverse hyperbolic sine nag_arcsinh (s11abc)
 Arctanh,
 inverse hyperbolic tangent nag_arctanh (s11aac)
 Bessel function,
 I0, real argument,
 scalar nag_bessel_i0 (s18aec)
 vectorized nag_bessel_i0_vector (s18asc)
 I1, real argument,
 scalar nag_bessel_i1 (s18afc)
 vectorized nag_bessel_i1_vector (s18atc)
 Iα + n − 1(x) or Iα − n + 1(x), real argument nag_bessel_i_alpha (s18ejc)
 Iν, complex argument, optionally scaled nag_complex_bessel_i (s18dec)
 Iν / 4(x), real argument nag_bessel_i_nu (s18eec)
 J0, real argument,
 scalar nag_bessel_j0 (s17aec)
 vectorized nag_bessel_j0_vector (s17asc)
 J1, real argument,
 scalar nag_bessel_j1 (s17afc)
 vectorized nag_bessel_j1_vector (s17atc)
 Jα  +  n − 1(x) or Jα − n + 1(x), real argument nag_bessel_j_alpha (s18ekc)
 Jα ± n(z), complex argument nag_complex_bessel_j_seq (s18gkc)
 Jν, complex argument, optionally scaled nag_complex_bessel_j (s17dec)
 K0, real argument,
 scalar nag_bessel_k0 (s18acc)
 vectorized nag_bessel_k0_vector (s18aqc)
 K1, real argument,
 vectorized nag_bessel_k1_vector (s18arc)
 Kα + n(x), real argument nag_bessel_k_alpha (s18egc)
 Kν, complex argument, optionally scaled nag_complex_bessel_k (s18dcc)
 Kν / 4(x), real argument nag_bessel_k_nu (s18efc)
 Y0, real argument,
 scalar nag_bessel_y0 (s17acc)
 vectorized nag_bessel_y0_vector (s17aqc)
 Y1, real argument,
 vectorized nag_bessel_y1_vector (s17arc)
 Yν, complex argument, optionally scaled nag_complex_bessel_y (s17dcc)
 beta function,
 incomplete nag_incomplete_beta (s14ccc)
 Complement of the Cumulative Normal distribution nag_cumul_normal_complem (s15acc)
 Complement of the Error function,
 complex argument, scaled nag_complex_erfc (s15ddc)
 real argument, scaled nag_erfcx (s15agc)
 Cosine,
 hyperbolic nag_cosh (s10acc)
 Cosine Integral nag_cos_integral (s13acc)
 Cumulative Normal distribution function nag_cumul_normal (s15abc)
 Dawson's Integral nag_dawson (s15afc)
 Elliptic functions, Jacobian, sn, cn, dn,
 complex argument nag_jacobian_elliptic (s21cbc)
 real argument nag_real_jacobian_elliptic (s21cac)
 Elliptic integral,
 general,
 of 2nd kind, F(z , k′ , a , b) nag_general_elliptic_integral_f (s21dac)
 Legendre form,
 complete of 1st kind, K(m) nag_elliptic_integral_complete_K (s21bhc)
 complete of 2nd kind, E (m) nag_elliptic_integral_complete_E (s21bjc)
 of 1st kind, F(ϕ | m) nag_elliptic_integral_F (s21bec)
 of 2nd kind, E (ϕ ∣ m) nag_elliptic_integral_E (s21bfc)
 of 3rd kind, Π (n ; ϕ ∣ m) nag_elliptic_integral_pi (s21bgc)
 symmetrised,
 degenerate of 1st kind, RC nag_elliptic_integral_rc (s21bac)
 of 1st kind, RF nag_elliptic_integral_rf (s21bbc)
 of 2nd kind, RD nag_elliptic_integral_rd (s21bcc)
 of 3rd kind, RJ nag_elliptic_integral_rj (s21bdc)
 Erf,
 real argument nag_erf (s15aec)
 Erfc,
 complex argument, scaled nag_complex_erfc (s15ddc)
 erfcx,
 real argument nag_erfcx (s15agc)
 Exponential Integral nag_exp_integral (s13aac)
 Fresnel integral,
 C,
 vectorized nag_fresnel_c_vector (s20arc)
 S,
 scalar nag_fresnel_s (s20acc)
 vectorized nag_fresnel_s_vector (s20aqc)
 Gamma function nag_gamma (s14aac)
 Gamma function,
 incomplete nag_incomplete_gamma (s14bac)
 Generalized factorial function nag_gamma (s14aac)
 Hankel function Hν(1) or Hν(2),
 complex argument, optionally scaled nag_complex_hankel (s17dlc)
 Hypergeometric functions,
 1F1 (a ; b ; x) , confluent, real argument nag_specfun_1f1_real (s22bac)
 1F1(a ; b ; x), confluent, real argument, scaled form nag_specfun_1f1_real_scaled (s22bbc)
 2F1 ( a , b ;  c ; x ) , Gauss, real argument nag_specfun_2f1_real (s22bec)
 2F1 ( a , b ;  c ; x ) , Gauss, real argument, scaled form nag_specfun_2f1_real_scaled (s22bfc)
 Jacobian theta functions θk(x , q),
 real argument nag_jacobian_theta (s21ccc)
 Kelvin function,
 bei x,
 scalar nag_kelvin_bei (s19abc)
 vectorized nag_kelvin_bei_vector (s19apc)
 ber x,
 scalar nag_kelvin_ber (s19aac)
 vectorized nag_kelvin_ber_vector (s19anc)
 kei x,
 vectorized nag_kelvin_kei_vector (s19arc)
 ker x,
 scalar nag_kelvin_ker (s19acc)
 vectorized nag_kelvin_ker_vector (s19aqc)
 Legendre functions of 1st kind Pnm(x), Pnm(x) nag_legendre_p (s22aac)
 Logarithm of 1 + x nag_shifted_log (s01bac)
 Logarithm of beta function,
 real nag_log_beta (s14cbc)
 Logarithm of gamma function,
 complex nag_complex_log_gamma (s14agc)
 real nag_log_gamma (s14abc)
 real, scaled nag_scaled_log_gamma (s14ahc)
 Modified Struve function,
 I0 − L0, real argument,
 scalar nag_struve_i0ml0 (s18gcc)
 I1 − L1, real argument,
 scalar nag_struve_i1ml1 (s18gdc)
 L0, real argument,
 scalar nag_struve_l0 (s18gac)
 L1, real argument,
 scalar nag_struve_l1 (s18gbc)
 Option Pricing,
 American option, Bjerksund and Stensland option price nag_amer_bs_price (s30qcc)
 Asian option, geometric continuous average rate price nag_asian_geom_price (s30sac)
 Asian option, geometric continuous average rate price with Greeks nag_asian_geom_greeks (s30sbc)
 binary asset-or-nothing option price nag_binary_aon_price (s30ccc)
 binary asset-or-nothing option price with Greeks nag_binary_aon_greeks (s30cdc)
 binary cash-or-nothing option price nag_binary_con_price (s30cac)
 binary cash-or-nothing option price with Greeks nag_binary_con_greeks (s30cbc)
 Black–Scholes–Merton option price nag_bsm_price (s30aac)
 Black–Scholes–Merton option price with Greeks nag_bsm_greeks (s30abc)
 European option, option prices, using Merton jump-diffusion model nag_jumpdiff_merton_price (s30jac)
 European option, option price with Greeks, using Merton jump-diffusion model nag_jumpdiff_merton_greeks (s30jbc)
 floating-strike lookback option price nag_lookback_fls_price (s30bac)
 floating-strike lookback option price with Greeks nag_lookback_fls_greeks (s30bbc)
 Heston's model option price nag_heston_price (s30nac)
 Heston's model option price with Greeks nag_heston_greeks (s30nbc)
 Heston's model with term structure nag_heston_term (s30ncc)
 standard barrier option price nag_barrier_std_price (s30fac)
 Polygamma function,
 ψ(n)(x), real x nag_real_polygamma (s14aec)
 ψ(n)(z), complex z nag_complex_polygamma (s14afc)
 psi function nag_polygamma_fun (s14acc)
 psi function derivatives, scaled nag_polygamma_deriv (s14adc)
 Scaled modified Bessel function(s),
 e−|x|I0(x), real argument,
 scalar nag_bessel_i0_scaled (s18cec)
 vectorized nag_bessel_i0_scaled_vector (s18csc)
 e−|x|I1(x), real argument,
 scalar nag_bessel_i1_scaled (s18cfc)
 vectorized nag_bessel_i1_scaled_vector (s18ctc)
 e − x Iν / 4(x), real argument nag_bessel_i_nu_scaled (s18ecc)
 ex K0 (x), real argument,
 scalar nag_bessel_k0_scaled (s18ccc)
 vectorized nag_bessel_k0_scaled_vector (s18cqc)
 ex K1 (x), real argument,
 scalar nag_bessel_k1_scaled (s18cdc)
 vectorized nag_bessel_k1_scaled_vector (s18crc)
 exKα + n(x), real argument nag_bessel_k_alpha_scaled (s18ehc)
 exKν / 4(x), real argument nag_bessel_k_nu_scaled (s18edc)
 Sine,
 hyperbolic nag_sinh (s10abc)
 Struve function,
 H0, real argument,
 scalar nag_struve_h0 (s17gac)
 H1, real argument,
 scalar nag_struve_h1 (s17gbc)
 Tangent,
 hyperbolic nag_tanh (s10aac)
 Zeros of Bessel functions Jα(x), Jα′(x), Yα(x), Yα′(x),
 scalar nag_bessel_zeros (s17alc)


## 7 References

NIST Digital Library of Mathematical Functions
Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications
Bjerksund P and Stensland G (2002) Closed form valuation of American options Discussion Paper 2002/09 NHH Bergen Norway http://www.nhh.no/
Carlson B C (1965) On computing elliptic integrals and functions J. Math. Phys. 44 36–51
Carlson B C (1977a) Special Functions of Applied Mathematics Academic Press
Carlson B C (1977b) Elliptic integrals of the first kind SIAM J. Math. Anal. 8 231–242
Clenshaw C W (1962) Chebyshev Series for Mathematical Functions Mathematical tables HMSO
Elices A (2008) Models with time-dependent parameters using transform methods: application to Heston’s model arXiv:0708.2020v2
Fox L and Parker I B (1968) Chebyshev Polynomials in Numerical Analysis Oxford University Press
Haug E G (2007) The Complete Guide to Option Pricing Formulas (2nd Edition) McGraw-Hill
Heston S (1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options Review of Financial Studies 6 327–343
Joshi M S (2003) The Concepts and Practice of Mathematical Finance Cambridge University Press
Lewis A L (2000) Option valuation under stochastic volatility Finance Press, USA
Mikhailov S and Nögel U (2003) Heston’s Stochastic Volatility Model Implementation, Calibration and Some Extensions Wilmott Magazine July/August 74–79
Pearson J (2009) Computation of hypergeometric functions MSc Dissertation, Mathematical Institute, University of Oxford
Schonfelder J L (1976) The production of special function routines for a multi-machine library Softw. Pract. Exper. 6(1)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017