The roots are located using a modified form of Laguerre's method, as implemented by Cameron (2018).
An implicit deflation strategy is employed, which allows for high accuracy even when solving high degree polynomials.
Linear ($n=1$) and quadratic ($n=2$) problems are solved directly using the standard closed-form formulae.
First, initial estimates of the roots are made using a method proposed by Bini (1996), which selects complex numbers along circles of suitable radii.
Updates to each root approximation ${z}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$, are then made using the iterative formula from Petković et al. (1997):
$$\hat{z}_j^{\,\pm} = z_j - \frac{n}{G_j \pm \sqrt{(n-1)\left(nH_j - G_j^2\right)}}\text{,}$$
where
$$G_j = \frac{P'(z_j)}{P(z_j)} - \sum_{i\ne j}\frac{1}{z_j - z_i} \quad\text{and}\quad H_j = {\left(\frac{P'(z_j)}{P(z_j)}\right)}^2 - \frac{P''(z_j)}{P(z_j)} - \sum_{i\ne j}\frac{1}{{\left(z_j - z_i\right)}^2}\text{.}$$
The nearest ${\hat{z}}_{j}$ to the current approximation is used, by selecting the sign that maximizes the modulus of the denominator of the correction term. The subtraction of the sum terms when computing ${G}_{j}$ and ${H}_{j}$ constitutes the implicit deflation strategy of the modified Laguerre method.
The relative backward error, $\eta \left({z}_{j}\right)$, of a root approximation ${z}_{j}$ measures the smallest relative perturbation of the polynomial coefficients for which ${z}_{j}$ is an exact root.
A root approximation is deemed to have converged if $\eta \left({z}_{j}\right)\le 2\epsilon $, where $\epsilon $ is the machine precision, at which point updates of that root cease. If the stopping criterion holds, then the computed root is the exact root of a polynomial whose coefficients are perturbed by no more than the rounding errors committed in the floating-point evaluation of $P\left({z}_{j}\right)$.
The condition number of each root is also computed, as a measure of that root's sensitivity to changes in the coefficients of the polynomial.
Root approximations can be further refined with optional 'polishing' processes. A simple polishing process is provided that carries out a single iteration of Newton's method, which is fast and often effective. Alternatively, a compensated polishing process from Cameron and O'Neill (2019) can be applied. This iterative method combines the implicit deflation of the modified Laguerre method with the accurate evaluation of polynomials and their derivatives using the compensated Horner scheme of Graillat et al. (2005). Compensated polishing yields approximations with a limiting accuracy as if computed in twice the working precision.
It is recommended that you read Section 9.1 for advice on selecting an appropriate polishing process.
4 References
Bini D A (1996) Numerical computation of polynomial zeros by means of Aberth's method Numerical Algorithms 13 179–200 Springer US
Cameron T R (2018) An effective implementation of a modified Laguerre method for the roots of a polynomial Numerical Algorithms Springer US https://doi.org/10.1007/s11075-018-0641-9
Cameron T R and O'Neill A (2019) On a compensated polishing technique for polynomial root solvers To be published
Graillat S, Louvet N and Langlois P (2005) Compensated Horner scheme Technical Report Université de Perpignan Via Domitia
Petković M, Ilić S and Tričković S (1997) A family of simultaneous zero-finding methods Computers & Mathematics with Applications 34 (10) 49–59 https://doi.org/10.1016/S0898-1221(97)00206-X
Wilkinson J H (1959) The evaluation of the zeros of ill-conditioned polynomials. Part I Numerische Mathematik 1 (1) 150–166 Springer-Verlag
On exit: ${\mathbf{berr}}\left[\mathit{j}-1\right]$ holds the relative backward error, $\eta \left({z}_{\mathit{j}}\right)$, for $\mathit{j}=1,2,\dots ,n$.
On exit: ${\mathbf{conv}}\left[\mathit{j}-1\right]$ indicates the convergence status of the root approximation, ${z}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$.
${\mathbf{conv}}\left[j-1\right]\ge 0$
Successfully converged after ${\mathbf{conv}}\left[j-1\right]$ iterations.
On entry, ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.
NE_OVERFLOW
c02aac encountered overflow during at least one root approximation.
Check conv and consider scaling the polynomial (see Section 9.2).
7 Accuracy
All roots are evaluated as accurately as possible, but because of the inherent nature of the problem complete accuracy cannot be guaranteed.
8 Parallelism and Performance
c02aac is not threaded in any implementation.
9 Further Comments
9.1 Selecting a Polishing Process
The choice of polishing technique ultimately depends on two factors: how well conditioned the problem is, and a preference between run time and accuracy. For a detailed analysis of the polishing techniques, see Cameron and O'Neill (2019).
Well-conditioned Problems
Simple polishing is effective in reducing the error in approximations of well-conditioned roots, and does so with a negligible increase in run time. Compensated polishing achieves comparable accuracy, but is approximately ten times slower than simple polishing.
Simple polishing (${\mathbf{polish}}=\mathrm{Nag\_Root\_Polish\_Simple}$) is recommended for well-conditioned problems.
Ill-conditioned Problems
There is a dramatic difference in accuracy between the two polishing techniques for ill-conditioned polynomials. Unpolished approximations are inaccurate and simple polishing often proves ineffective. However, compensated polishing is able to reduce errors by several orders of magnitude.
Compensated polishing (${\mathbf{polish}}=\mathrm{Nag\_Root\_Polish\_Compensated}$) is highly recommended for ill-conditioned problems.
9.2 Scaling the Polynomial
c02aac attempts to avoid overflow conditions where possible. However, if the function fails with ${\mathbf{fail}}\mathbf{.}\mathbf{code}=$NE_OVERFLOW, such conditions could not be avoided for the given polynomial. Use conv to identify the roots for which overflow occurred, as other approximations may still have succeeded.
Extremely large and/or small coefficients are likely to be the cause of overflow failures. In such cases, you are recommended to scale the independent variable $\left(z\right)$ so that the disparity between the largest and smallest coefficient in magnitude is reduced. That is, use the function to locate the zeros of the polynomial $sP\left(cz\right)$ for some suitable values of $c$ and $s$. For example, if the original polynomial was $P\left(z\right)={2}^{-100}i+{2}^{100}{z}^{20}$, then choosing $c={2}^{-10}$ and $s={2}^{100}$, for instance, would yield the scaled polynomial $i+{z}^{20}$, which is well-behaved relative to overflow and has zeros which are ${2}^{10}$ times those of $P\left(z\right)$.
10 Example
The example program for c02aac demonstrates two problems, given in the functions ex1_basic and ex2_polishing. (Note that by default, the second example is switched off because the results may be machine dependent. Edit the program in the obvious way to switch it on.)
Example 1: Basic Problem
This example finds the roots of the quintic polynomial
$${a}_{0}{z}^{5}+{a}_{1}{z}^{4}+{a}_{2}{z}^{3}+{a}_{3}{z}^{2}+{a}_{4}z+{a}_{5}=0\text{,}$$
where
${a}_{0}=(5.0+6.0i)$,
${a}_{1}=(30.0+20.0i)$,
${a}_{2}=-(0.2+6.0i)$,
${a}_{3}=(50.0+100000.0i)$,
${a}_{4}=-(2.0-40.0i)$ and
${a}_{5}=(10.0+1.0i)$.
Example 2: Polishing Processes
This example finds the roots of a polynomial of the form
$$(z-1)(z-2)\cdots (z-n)=0\text{,}$$
first proposed by Wilkinson (1959) as an example of a polynomial whose roots are ill conditioned, being sensitive to small changes in the coefficients. A polishing mode is demonstrated with $n=10$, and the maximum forward and relative backward errors of the approximations are displayed.
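The coefficients of this test polynomial can be generated exactly by multiplying in one linear factor at a time; for $n=10$ all coefficients are integers well within the range represented exactly by double precision. A sketch (leading-first coefficient ordering, as assumed throughout these examples):

```c
/* Coefficients of (z-1)(z-2)...(z-n), leading coefficient first,
   built by multiplying in one linear factor (z - k) at a time.
   a must have n+1 entries. */
static void wilkinson_coeffs(int n, double a[])
{
    a[0] = 1.0;
    for (int k = 1; k <= n; k++) {
        a[k] = -(double)k * a[k - 1];       /* new constant term     */
        for (int i = k - 1; i >= 1; i--)    /* descending: uses old  */
            a[i] -= (double)k * a[i - 1];   /* a[i-1] before update  */
    }
}

/* Horner evaluation of the polynomial with those coefficients. */
static double poly_eval(int n, const double a[], double z)
{
    double p = a[0];
    for (int i = 1; i <= n; i++)
        p = p * z + a[i];
    return p;
}
```

For $n=10$ the coefficient of $z^9$ is $-(1+2+\cdots +10)=-55$ and the constant term is $10!=3628800$; although the coefficients themselves are exact, the roots they define are highly sensitive to perturbations, which is what makes this a useful stress test for the polishing processes.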