Known Issues for the NAG Library CL Interface
This document reflects all reported and resolved issues that affect various releases of the NAG Library CL Interface.
Some of these issues may have been fixed at intermediate "point" releases of the Library, while other fixes are scheduled for incorporation at future releases. For library Marks where those fixes are not yet incorporated, a workaround for the known issue is provided wherever possible.
To find the Mark and point release number of your library, call NAG function a00aac().
Synopsis  Overflow may occur if the function attempts to scale the polynomial coefficients. 
Description  In rare circumstances overflow may be observed if ${\mathbf{scale}}=\mathrm{Nag\_TRUE}$. 
Severity  Noncritical 
Issue Since Mark  7 
Workaround  Set argument ${\mathbf{scale}}=\mathrm{Nag\_FALSE}$. 
Synopsis  Multilevel wavelets cannot handle periodic end extension. 
Description  When ${\mathbf{mode}}=\mathrm{Nag\_Periodic}$ and ${\mathbf{wtrans}}=\mathrm{Nag\_MultiLevel}$ the multilevel wavelet transform functions do not work properly if $n$ is not a power of 2. 
Severity  Noncritical 
Issue Since Mark  9 
Fixed at Mark  23 
Workaround  The option combination of a multilevel wavelet transform using a periodic end extension is currently disallowed; a call to the initialization function c09aac with this combination will return with an error code.
For multilevel analysis of periodic data, you are advised to experiment with the alternative end conditions; the periodic property of the data can also be used to extend the data set in both directions to points that better suit the alternative end condition (e.g., extend the data to the next maximum or minimum).

Synopsis  Initialization and option setting does not work when using the long name. 
Description  Initialization and option setting for the sparse grid function nag_quad_md_sgq_multi_vec (d01esc) using nag_quad_opt_set (d01zkc) does not work when using the long name nag_quad_md_sgq_multi_vec in the option string.
It does work when using the short name d01esc in the option string.

Severity  Noncritical 
Issue Since Mark  25 
Fixed at Mark  25.1 
Workaround  Initializing and setting options for nag_quad_md_sgq_multi_vec (d01esc) via calls to nag_quad_opt_set (d01zkc) should use option strings containing the short name d01esc rather than the long name. 
Synopsis  Segmentation faults when optional parameter ${\mathbf{Index\; Level}}$ is set to a value greater than ${m}_{q}$. 
Description  Segmentation faults or other array bound violation problems may occur if the value of ${\mathbf{Index\; Level}}$ (set via a call to d01zkc) is greater than ${m}_{q}$, the maximum level of the underlying quadrature rule. 
Severity  Critical 
Issue Since Mark  25 
Fixed at Mark  25.4 
Workaround  Do not set ${\mathbf{Index\; Level}}$ to more than 9 when using Gauss–Patterson or more than 12 when using Clenshaw–Curtis. 
Synopsis  ${\mathbf{Quadrature\; Rule}}=\mathrm{GP}$ is not accepted as a valid option. 
Description  When setting the quadrature rule for d01esc using the d01zkc option setting function, the documented choice ${\mathbf{Quadrature\; Rule}}=\mathrm{GP}$ is not recognised as a valid option and an error is reported. 
Severity  Noncritical 
Issue Since Mark  25 
Fixed at Mark  25.4 
Workaround  The alternatives ${\mathbf{Quadrature\; Rule}}=\mathrm{GaussPatterson}$ or $\mathrm{GPATT}$ may be used instead.
Note: GaussPatterson is the default choice for the quadrature rule in d01esc, so in general it will not be necessary to specify this option.

Synopsis  Stack size or thread safety problems may be observed with some d06 functions. 
Description  d06aac, d06abc and d06acc contain large local arrays that may cause stack size and/or thread safety problems. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  23.3 
Workaround  Do not use these functions in a multithreaded environment. For serial execution, set the stack size limit to 10 MB or greater. 
Synopsis  The actual required minimum for ${\mathbf{nvmax}}$ is ${\mathbf{nvb}}+{\mathbf{nvint}}+12$, not the documented constraint ${\mathbf{nvmax}}\ge {\mathbf{nvb}}+{\mathbf{nvint}}$; the smaller value can give unpredictable results or a segmentation fault.

Description  Although the documented constraint on ${\mathbf{nvmax}}$ is ${\mathbf{nvmax}}\ge {\mathbf{nvb}}+{\mathbf{nvint}}$, the actual required minimum for ${\mathbf{nvmax}}$ is ${\mathbf{nvb}}+{\mathbf{nvint}}+12$.
For some small scale problems, setting ${\mathbf{nvmax}}={\mathbf{nvb}}+{\mathbf{nvint}}$ will give unpredictable results and could produce a segmentation fault.
The problem is remedied by setting ${\mathbf{nvmax}}={\mathbf{nvb}}+{\mathbf{nvint}}+12$ and ensuring that the arrays ${\mathbf{coor}}$ and ${\mathbf{conn}}$ are correspondingly large enough.

Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  26 
Workaround  Set ${\mathbf{nvmax}}\ge {\mathbf{nvb}}+{\mathbf{nvint}}+12$; allocate the arrays ${\mathbf{coor}}$ and ${\mathbf{conn}}$ using this value of ${\mathbf{nvmax}}$. 
Synopsis  d06acc returns ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_MESH\_ERROR}}$ error for some boundary meshes due to an internal scaling issue. 
Description  d06acc returns ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_MESH\_ERROR}}$ error for some boundary meshes due to an internal scaling issue. 
Severity  Noncritical 
Issue Since Mark  7 
Fixed at Mark  28.4 
Workaround  Scale the input boundary mesh prior to calling d06acc so that the minimum coordinate is $0$ and the maximum is $1$. 
Synopsis  The algorithm underlying interpolation functions e01sgc, e01shc, e01tgc and e01thc was modified at Mark 26 and Mark 26.1; different results will be obtained when using these functions than previously. 
Description  The algorithm underlying interpolation functions e01sgc, e01shc, e01tgc and e01thc was modified at Mark 26 to improve perceived deficiencies. In particular, at earlier library Marks the evaluation functions would not attempt to return any useful result if an evaluation point was not close enough to any of the original data points, and this issue was addressed at Mark 26.
At Mark 26.1 further work was done on the functions because they had been found not to work well on gridded data sets (as opposed to the random data sets that they are primarily intended for).
It should be noted that because of the various underlying changes to the functions, the precise results returned from Mark 26 onwards will not usually be identical to those before Mark 26.

Severity  Noncritical 
Issue Since Mark  26 
Fixed at Mark  26.1 
Workaround  Not applicable. 
Synopsis  e01shc will occasionally incorrectly identify a point as being outside the region defined by the interpolant. 
Description  e01shc will occasionally incorrectly identify a point as being outside the region defined by the interpolant. This leads to the function value being extrapolated rather than interpolated and can lead to incorrect results. 
Severity  Noncritical 
Issue Since Mark  26.0 
Fixed at Mark  27.1 
Workaround  None. 
Synopsis  Ill-conditioned data sets may cause e02gac to get stuck in an infinite loop. 
Description  Certain ill-conditioned data sets could cause e02gac to get stuck in an infinite loop. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  26 
Workaround  As a workaround, it may be possible to avoid the infinite loop by reordering the points in the input data. 
Synopsis  e04dgc returns wrong gradient values in ${\mathbf{g}}$. 
Description  e04dgc prints gradient values correctly, but returns the wrong values in argument ${\mathbf{g}}$. 
Severity  Noncritical 
Issue Since Mark  7 
Fixed at Mark  25 
Workaround  In ${\mathbf{objfun}}$, when ${\mathbf{g}}$ is set, push these values into the ${\mathbf{user}}$ structure. Following the call to e04dgc the correct values for ${\mathbf{g}}$ can then be obtained from ${\mathbf{user}}$. 
Synopsis  Internal buffer overflow in e04fcc. 
Description  When the grade of the optimization problem drops to zero, an internal buffer overflow occurs. This destroys some of the internal state of the optimizer, typically causing it to stop prematurely.
Scope of the problem: if the grade of the optimization problem is nonzero on exit from e04fcc, the bug is not triggered and that particular optimization is unaffected. If the grade is zero on exit, the optimization is affected in all supported CL Marks.
Since the solver is typically close to convergence when the grade drops to zero, the returned solution is usually still of good quality, and the bug fix is unlikely to improve the results of e04fcc significantly.

Severity  Noncritical 
Issue Since Mark  8 
Fixed at Mark  24.2 
Workaround  There is no practical workaround. 
Synopsis  In very rare cases, the algorithm used by e04lbc may become trapped in an infinite loop. 
Description  The function might loop unnecessarily and finish with ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW\_TOO\_MANY\_ITER}}$ when a variable lying on the boundary is cyclically added to and removed from the set of free variables. This can happen only at points with an indefinite Hessian and very small projected gradients, when one variable lies on the boundary and another is very close to it. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  25 
Workaround  Unfortunately there is no convenient workaround. 
Synopsis  ${\mathbf{stats}}$ and ${\mathbf{rinfo}}$ were not correctly filled by the presolver. 
Description  The arrays ${\mathbf{stats}}$ and ${\mathbf{rinfo}}$ were not correctly filled when the problem was entirely solved by the presolver. It now returns the correct values.
The optional parameter ${\mathbf{Print\; Solution}}$ now correctly writes the linear constraints dual variables when no bounds are defined on the variables.

Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  27 
Workaround  Do not rely on ${\mathbf{rinfo}}\left[0\right],{\mathbf{rinfo}}\left[1\right]$ to hold the primal and dual objective in this case; recompute them as ${c}^{\prime}x$ and ${b}^{\prime}y$, respectively. 
Synopsis  e04mtc does not report the correct solution when $3$ or more columns are proportional to each other in the constraint matrix. 
Description  e04mtc does not report the correct solution when $3$ or more columns are proportional to each other in the constraint matrix. In such a case, the solution reported may be infeasible. 
Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  27 
Workaround  A workaround is to disable the more complex presolve operations by setting the optional parameter ${\mathbf{LP\; Presolve}}=\mathrm{BASIC}$. This may slow down the solver depending on the problem. 
Synopsis  In some very rare cases, the solution reported presents big violations on a small number of linear constraints. 
Description  In some very rare cases, the solution reported presents big violations on a small number of linear constraints. 
Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  27.1 
Workaround  A workaround is to deactivate the more complex presolver operations with the optional parameter ${\mathbf{LP\; Presolve}}=\mathrm{BASIC}$. 
Synopsis  In some very rare cases, e04mtc reports problem infeasibility for a feasible problem. 
Description  In some very rare cases, the solver reports problem infeasibility when there are numerical difficulties. 
Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  28.6 
Workaround  Unfortunately there is no convenient workaround. 
Synopsis  Infeasible bounds defined by e04rjc of a variable are ignored and infeasibility is not reported. 
Description  When infeasible bounds are defined by e04rjc for a variable, instead of reporting problem infeasibility, the bounds are overridden and wrong solution may be reported. 
Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  27.1 
Workaround  A workaround is to deactivate the more complex presolver operations with the optional parameter ${\mathbf{LP\; Presolve}}=\mathrm{BASIC}$ for e04mtc and ${\mathbf{SOCP\; Presolve}}=\mathrm{BASIC}$ for e04ptc. 
Synopsis  Internal file overflow. 
Description  If you set a ${\mathbf{New\; Basis\; File}}$ in e04nqc, e04vhc and e04wdc and your total problem size ( ${\mathbf{n}}+{\mathbf{m}}$, ${\mathbf{n}}+{\mathbf{nf}}$ or ${\mathbf{n}}+{\mathbf{nclin}}+{\mathbf{ncnln}}$, respectively) is greater than 80 you will experience an internal buffer overflow and possible program crash. 
Severity  Critical 
Issue Since Mark  9.4 
Fixed at Mark  23 
Workaround  Unfortunately there is no convenient workaround. The only way to avoid this crash is to not specify a ${\mathbf{New\; Basis\; File}}$ or to have a small enough problem. 
Synopsis  Optional parameters ${\mathbf{List}}$ and ${\mathbf{Nolist}}$ are not handled correctly. 
Description  Functions e04nrc, e04vkc and e04wec do not handle optional parameters ${\mathbf{List}}$ and ${\mathbf{Nolist}}$ correctly. Specifying ${\mathbf{List}}$ does not alter the behaviour of subsequent functions in the suite, and specifying ${\mathbf{Nolist}}$ erroneously reports an error. 
Severity  Noncritical 
Issue Since Mark  8 
Fixed at Mark  27.3 
Workaround  Function e04nsc should be used instead to set optional parameters ${\mathbf{List}}$ or ${\mathbf{Nolist}}$. 
Synopsis  e04stc returns Lagrangian multipliers in the wrong order. 
Description  The Lagrangian multipliers returned in ${\mathbf{u}}$ are in the wrong order. 

Severity  Noncritical 
Issue Since Mark  26 
Fixed at Mark  26.1 
Workaround  None; from Mark 26.1 the order described in the documentation is used. 
Synopsis  An inner optimization step might be unnecessarily resolved in certain cases. 
Description  If the solver is run in a mode where some (or all) derivatives may be missing ( ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_deriv}}=\mathrm{Nag\_FALSE}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{con\_deriv}}=\mathrm{Nag\_FALSE}$), but all derivatives are in fact provided, the solver might trigger an extra resolve of the inner optimization step if it detects numerical difficulties. This extra step normally switches derivative approximations to central differences to improve stability; in this case, however, it changes nothing (all derivatives are provided by the user, so no derivative approximation takes place) and is therefore unnecessary. 
Severity  Noncritical 
Issue Since Mark  7 
Fixed at Mark  26.1 
Workaround  You might want to set ${\mathbf{options}}\mathbf{.}{\mathbf{obj\_deriv}}=\mathrm{Nag\_TRUE}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{con\_deriv}}=\mathrm{Nag\_TRUE}$ if all derivatives are provided. 
Synopsis  A possible buffer overflow in the printing of the derivative checker of e04unc. 
Description  If ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}$ is set to a full check of the objective function and/or the constraints ( ${\mathbf{options}}\mathbf{.}{\mathbf{verify\_grad}}=\mathrm{Nag\_CheckObj}$, $\mathrm{Nag\_CheckObjCon}$, $\mathrm{Nag\_CheckCon}$, ...), ${\mathbf{options}}\mathbf{.}{\mathbf{print\_deriv}}=\mathrm{Nag\_D\_Full}$, and the checked function has zero elements in its derivatives, an internal buffer might overflow, which can lead to a crash. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  26 
Workaround  To avoid the problem, use ${\mathbf{options}}\mathbf{.}{\mathbf{print\_deriv}}=\mathrm{Nag\_D\_Sum}$ or $\mathrm{Nag\_D\_NoPrint}$. 
Synopsis  When the objective function has no separated linear part, using user-defined names for variables and constraints might lead to a crash. 
Description  When the objective function only has the nonlinear part defined, without a separated linear part, the solver might crash when trying to read user-defined names for variables and constraints. 
Severity  Critical 
Issue Since Mark  8 
Fixed at Mark  27.1 
Workaround  Unfortunately there is no convenient workaround. The only way to avoid this crash is to not specify names for variables and constraints. 
Synopsis  Information about the last constraint might not be printed. 
Description  If the problem has a nonlinear objective function without a linear part and ${\mathbf{objrow}}<{\mathbf{nf}}$, the last constraint is not printed in the final information about the solution (Rows section). 
Severity  Noncritical 
Issue Since Mark  8 
Fixed at Mark  26 
Workaround  None. 
Synopsis  User-supplied character strings containing spaces may cause garbled error messages. 
Description  Some functions produce error messages containing character data that has been supplied through the argument list by the user. An example is e04vhc, where the ${\mathbf{xnames}}$ or ${\mathbf{fnames}}$ can be referred to in error messages. Having spaces in these strings confuses the internal error-message splitter, which splits on spaces. Thus, error messages returned by the function may be garbled. 
Severity  Noncritical 
Issue Since Mark  9 
Fixed at Mark  23 
Workaround  Make sure user-provided character data contains no spaces. 
Synopsis  An unhelpful error exit is returned if e05ucc is called with incorrectly initialized optional parameter arrays ${\mathbf{iopts}}$ and ${\mathbf{opts}}$. 
Description  Function e05ucc returns ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_INTERNAL\_ERROR}}$ if it is called without previously having called e05zkc with argument ‘Initialize = e05ucc’. 
Severity  Noncritical 
Issue Since Mark  23 
Fixed at Mark  24 
Workaround  Call e05zkc with argument ‘Initialize = e05ucc’ before calling e05ucc. 
Synopsis  Function f02wec may fail to compute any results, but with no error flag set. 
Description  Certain combinations of arguments ${\mathbf{wantp}}$ and ${\mathbf{wantq}}$ together with their associated output arrays ${\mathbf{q}}$ and ${\mathbf{pt}}$ can cause f02wec to fail to compute any results, but with no error flag set. Specifically, the function documentation states that in some circumstances array argument ${\mathbf{q}}$ may be a NULL pointer (in which case the left-hand singular vectors, if required, are stored in array ${\mathbf{a}}$). However, an auxiliary function called by f02wec checks whether ${\mathbf{q}}$ is NULL, and if so f02wec silently fails. 
Severity  Critical 
Issue Since Mark  25 
Fixed at Mark  26 
Workaround  Always supply a non-NULL array argument ${\mathbf{q}}$ even if the documentation for f02wec states that a NULL pointer is allowed. 
Synopsis  Function f02xec may fail to compute any results, but with no error flag set. 
Description  Certain combinations of arguments ${\mathbf{wantp}}$ and ${\mathbf{wantq}}$ together with their associated output arrays ${\mathbf{q}}$ and ${\mathbf{ph}}$ can cause f02xec to fail to compute any results, but with no error flag set. Specifically, the function documentation states that in some circumstances array argument ${\mathbf{q}}$ may be a NULL pointer (in which case the left-hand singular vectors, if required, are stored in array ${\mathbf{a}}$). However, an auxiliary function called by f02xec checks whether ${\mathbf{q}}$ is NULL, and if so f02xec silently fails. 
Severity  Critical 
Issue Since Mark  25 
Fixed at Mark  26 
Workaround  Always supply a non-NULL array argument ${\mathbf{q}}$ even if the documentation for f02xec states that a NULL pointer is allowed. 
Synopsis  Multithreaded versions of the functions f11bec, f11bsc, f11gec and f11gsc may produce slightly different results when run on multiple threads. 
Description  Multithreaded versions of the functions f11bec, f11bsc, f11gec and f11gsc may produce slightly different results when run on multiple threads, e.g., the number of iterations to solution and the computed matrix norms and termination criteria reported by the associated monitoring functions. A bug affecting f11bec and f11gec has been fixed, and parallel vector dot products have been modified in all functions to improve consistency of results. 
Severity  Noncritical 
Issue Since Mark  26 
Fixed at Mark  27.1 
Workaround  None. 
Synopsis  f16qec and f16tec reference diagonal elements when unit diagonal entries are assumed. 
Description  f16qec and f16tec reference and copy diagonal elements when unit diagonal entries are assumed. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  28 
Workaround  Nothing needs to be done unless diagonal entries of the target matrix contain useful data prior to a call of f16qec or f16tec with ${\mathbf{diag}}=\mathrm{Nag\_UnitDiag}$, in which case the useful data should be saved and copied back to the diagonal of the target matrix after the call to either f16qec or f16tec. 
Synopsis  f16qfc, if called with a ${\mathbf{pdb}}$ that violates the minimum constraint, will produce a segmentation fault. 
Description  f16qfc, if called with a ${\mathbf{pdb}}$ that violates the minimum constraint, will produce a segmentation fault. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  28 
Workaround  Call f16qfc with a ${\mathbf{pdb}}$ that meets the documented minimum constraint. 
Synopsis  f16rbc and f16ubc return $0$ if ${\mathbf{kl}}$ or ${\mathbf{ku}}$ is $0$, instead of the correct norm. ${\mathbf{pdab}}$ is incorrectly forced to be at least ${\mathbf{m}}$ when $m=n$. 
Description  f16rbc and f16ubc mistakenly make a quick return if ${\mathbf{kl}}$ or ${\mathbf{ku}}$ is $0$, instead of computing the correct value for the requested norm. Also, ${\mathbf{pdab}}$ is incorrectly forced to be at least ${\mathbf{m}}$ when $m=n$. 
Severity  Critical 
Issue Since Mark  9.1 
Fixed at Mark  23 
Workaround  If ${\mathbf{kl}}$ or ${\mathbf{ku}}$ is $0$, use the general matrix-norm functions f16rac or f16uac, with the input matrix in full storage. If $m=n$, make sure that ${\mathbf{pdab}}\ge {\mathbf{m}}$. 
Synopsis  Incorrect Frobenius norm returned in some cases. 
Description  When calling one of the functions: f16rdc, f16rec, f16udc, f16uec, f16ufc and f16ugc with ${\mathbf{order}}=\mathrm{Nag\_RowMajor}$ and ${\mathbf{norm}}=\mathrm{Nag\_FrobeniusNorm}$, the returned norm can be incorrect. 
Severity  Critical 
Issue Since Mark  23 
Fixed at Mark  28 
Workaround  These functions will return the correct norm if the ${\mathbf{order}}$ argument is set to $\mathrm{Nag\_ColMajor}$ and the ${\mathbf{uplo}}$ argument is flipped, i.e., from $\mathrm{Nag\_Upper}$ to $\mathrm{Nag\_Lower}$ or vice versa. 
Synopsis  f16smc returns wrong update of $A$ when ${\mathbf{a}}$ is stored in row major order and ${\mathbf{y}}$ is to be conjugated. 
Description  When f16smc is called with ${\mathbf{order}}=\mathrm{Nag\_RowMajor}$ and ${\mathbf{conj}}=\mathrm{Nag\_Conj}$, $A$ is updated as though ${\mathbf{conj}}=\mathrm{Nag\_NoConj}$. 
Severity  Critical 
Issue Since Mark  8 
Fixed at Mark  28 
Workaround  Call f16smc with ${\mathbf{conj}}=\mathrm{Nag\_NoConj}$ and conjugate ${\mathbf{y}}$ prior to call. 
Synopsis  f16tac stops program execution when called with ${\mathbf{pda}}<{\mathbf{n}}$. 
Description  f16tac, when called with ${\mathbf{pda}}<{\mathbf{n}}$ does not return error code ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_INT\_2}}$, but terminates program execution. 
Severity  Critical 
Issue Since Mark  8 
Fixed at Mark  28 
Workaround  Call f16tac with ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$. 
Synopsis  f16tfc returns incorrect results when computing a transposed copy of a matrix. 
Description  f16tfc returns incorrect results when computing a transposed copy of a matrix. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  28 
Workaround  Call f01cwf with ${\mathbf{alpha}}=\text{one}$ and ${\mathbf{beta}}=\text{zero}$; for row-ordered matrices, ${\mathbf{m}}$ and ${\mathbf{n}}$ should be switched. 
Synopsis  The returned matrix is not a valid correlation matrix. 
Description  The algorithm computes an incorrect value for ${\mathbf{alpha}}$. Thus the returned matrix is not positive definite as stated, and is not a valid correlation matrix. 
Severity  Critical 
Issue Since Mark  25 
Fixed at Mark  25.3 
Workaround  Unfortunately there is no convenient workaround. 
Synopsis  When ${\mathbf{mean}}=\mathrm{Nag\_AboutZero}$, output arguments ${\mathbf{a}}$ and ${\mathbf{a\_serr}}$ are not initialized. 
Description  When ${\mathbf{mean}}=\mathrm{Nag\_AboutZero}$, output arguments ${\mathbf{a}}$ and ${\mathbf{a\_serr}}$ are not initialized. These values relate to a regression constant that is only relevant in the ${\mathbf{mean}}=\mathrm{Nag\_AboutMean}$ case. However, the code for ${\mathbf{mean}}=\mathrm{Nag\_AboutZero}$ should initialize them to $0.0$. This was not done, allowing previously set values or random results to be erroneously returned. 
Severity  Noncritical 
Issue Since Mark  7 
Fixed at Mark  27.3 
Workaround  The safest solution is to manually set these to $0.0$ (but only in the ${\mathbf{mean}}=\mathrm{Nag\_AboutZero}$ case) immediately after calling this function. 
Synopsis  Incorrect results are returned when performing a Mallows type regression. 
Description  Incorrect results are returned when performing a Mallows type regression, averaging over residuals. 
Severity  Noncritical 
Issue Since Mark  7 
Fixed at Mark  26.1 
Workaround  None. 
Synopsis  Segmentation fault caused by access past the end of an array. 
Description  An error can occur when there are multiple blocks of random variables, at least one with a subject variable and at least one without. The error can only occur when the block with the subject variable occurs first in ${\mathbf{rndm}}$. 
Severity  Critical 
Issue Since Mark  23 
Fixed at Mark  25 
Workaround  Ensure that blocks without subject variables appear in ${\mathbf{rndm}}$ before those with subject variables. 
Synopsis  In very rare cases, the function may become trapped in an infinite loop. 
Description  The function was affected by a bug in the underlying solver e04lbc (modified Newton method). In very rare cases the solver might get into an infinite loop. 
Severity  Critical 
Issue Since Mark  9 
Fixed at Mark  25 
Workaround  The bug can be avoided by switching to the other optimizer (SQP method e04ucc, ${\mathbf{iopt}}\left[4\right]=1$). 
Synopsis  A segmentation fault is likely to occur if a model with multiple random statements is supplied to the function, where at least one of those statements does not have a ${\mathbf{Subject}}$ term. 
Description  A segmentation fault is likely to occur if a model with multiple random statements is supplied to the function, where at least one of those statements does not have a ${\mathbf{Subject}}$ term.
For example, a model specified using:
V1 + V2 / SUBJECT = V3
V4 + V5 / SUBJECT = V6
would not trigger the error, but one specified using:
V1 + V2
V4 + V5 / SUBJECT = V6
would. The error is not triggered when there is only a single random statement, so a model specified using just:
V1 + V2
will not trigger the error. 
Severity  Critical 
Issue Since Mark  27 
Fixed at Mark  27.1 
Workaround  A workaround to this issue is to always supply a ${\mathbf{Subject}}$ term. If the required model is of the form:
V1 + V2
V4 + V5 / SUBJECT = V6
then you can specify an equivalent model by using:
V1 + V2 / SUBJECT = DUMMY
V4 + V5 / SUBJECT = V6
where the variable DUMMY takes the same value for every observation. 
Synopsis  Returns incorrect results when ${\mathbf{ntau}}>1$ and user supplied initial values for ${\mathbf{b}}$ are being used. 
Description  If ${\mathbf{ntau}}>1$, the optional parameter ${\mathbf{Calculate\; Initial\; Values}}=\mathrm{NO}$ is set, and the rows of the array ${\mathbf{b}}$ are not all identical, then the results returned by g02qgc are incorrect. 
Severity  Critical 
Issue Since Mark  23 
Fixed at Mark  24 
Workaround  Rather than call the function once with ${\mathbf{ntau}}>1$, call the function multiple times with ${\mathbf{ntau}}=1$, analysing a different value of ${\mathbf{tau}}$ on each call. 
Synopsis  Unexpected ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_INTERNAL\_ERROR}}$ errors in g02zkc. 
Description  g02zkc may report a ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE\_INTERNAL\_ERROR}}$ error for some valid minimum abbreviations of option names supplied in the input argument ${\mathbf{optstr}}$, e.g., when using ‘DEF’ instead of ‘Defaults’. 
Severity  Noncritical 
Issue Since Mark  23 
Fixed at Mark  26 
Workaround  Use the full option name, e.g., specify ‘Defaults’ rather than ‘Def’. 
Synopsis  Memory-checking tools on Windows may report a leak from the CL Interface. 
Description  Memory-checking tools on Windows may report a memory leak during execution of a multithreaded program that calls d01xbc. 
Severity  Critical 
Issue Since Mark  5 
Fixed at Mark  28 
Workaround  The reported warnings may be ignored. 
Synopsis  The wrong value for ${\mathbf{p}}$ is returned when ${\mathbf{aa2}}$ is large. 
Description  In g08ckc and g08clc the value returned for the upper-tail probability ${\mathbf{p}}$ is wrong when the calculated Anderson–Darling test statistic ${\mathbf{aa2}}$ is large. In the case of g08ckc, when ${\mathbf{aa2}}>153.4677$ the returned value of ${\mathbf{p}}$ should be zero; in the case of g08clc, when ${\mathbf{aa2}}>10.03$ the returned value of ${\mathbf{p}}$ should be $\le \mathrm{exp}\left(-14.360135\right)$. 
Severity  Critical 
Issue Since Mark  23 
Workaround  Workaround for g08ckc:
Call g08ckc(...); if (aa2 > 153.4677) p = 0.0;
Workaround for g08clc:
Call g08clc(...); if (aa2 > 10.03) p = exp(-14.360135); 
Synopsis  The methods ${\mathbf{smoother}}=\mathrm{Nag\_3RSSH}$ and $\mathrm{Nag\_4253H}$ were implemented in reverse. 
Description  g10cac implements two methods of smoothing, ${\mathbf{smoother}}=\mathrm{Nag\_3RSSH}$ and $\mathrm{Nag\_4253H}$. Unfortunately they were implemented in reverse, so if you ask for ${\mathbf{smoother}}=\mathrm{Nag\_3RSSH}$ you get $\mathrm{Nag\_4253H}$ and vice versa. 
Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  25 
Workaround  Use ${\mathbf{smoother}}=\mathrm{Nag\_3RSSH}$ if you want results for ${\mathbf{smoother}}=\mathrm{Nag\_4253H}$, and vice versa. 
Synopsis  g13fac may return a negative value as the estimate of the last $\beta $ parameter (i.e., ${\beta}_{p}$) for a subset of models. 
Description  g13fac can result in a negative value for the estimate of the last $\beta $ parameter (i.e., ${\beta}_{p}$) or, if $p=0$, the last $\alpha $ parameter (i.e., ${\alpha}_{q}$).
This issue only affects a subset of models that have normally distributed errors and do not include an asymmetry term.
If the function did not return a negative value as the estimate of the last $\beta $ parameter (or, if $p=0$, the last $\alpha $ parameter), then that particular model was not affected by the issue.

Severity  Critical 
Issue Since Mark  7 
Fixed at Mark  27 
Workaround  None 
Synopsis  When ${\mathbf{what}}=\mathrm{Nag\_VarianceComponent}$ the information returned in ${\mathbf{plab}}$ and/or ${\mathbf{vinfo}}$ may be incorrect. 
Description  The information returned in ${\mathbf{plab}}$ and/or ${\mathbf{vinfo}}$ may be incorrect in cases where ${\mathbf{what}}=\mathrm{Nag\_VarianceComponent}$ and the underlying linear mixed effects regression model has a random variable, with a single level (so either binary or continuous), that only takes the value zero. 
Severity  Noncritical 
Issue Since Mark  27.0 
Workaround  The workaround is to drop the term from the model, as it does not contribute. For example, if the random part of your model was specified as:
V1 + V2 / SUBJECT=V3
and the variable V2 was a continuous variable that only takes the value zero in the data, then this is equivalent to respecifying the model using:
V1 / SUBJECT=V3
Synopsis  Thread Local Storage default limit was exceeded for delay loaded shared library. 
Description  A fair amount of thread local storage had been allocated by an auxiliary function, which has now been updated to use a very small amount. Prior to the update, this only affected the case where the shared version of the NAG Library was delay loaded, since delay loading assumes a small default maximum amount of thread local storage, which was in fact exceeded.
The issue had been present since the introduction of the auxiliary function at Mark 26.1. From Mark 28.6, the amount of thread local storage used is very small and this is no longer an issue.

Severity  Noncritical 
Issue Since Mark  26.1 
Fixed at Mark  28.6 
Workaround  None. 