The roots are located using a modified form of Laguerre's method, as implemented by Cameron (2018).
An implicit deflation strategy is employed, which allows for high accuracy even when solving high degree polynomials.
The roots of linear ($n=1$) and quadratic ($n=2$) polynomials are obtained directly via the standard closed formulae.
First, initial estimates of the roots are made using a method proposed by Bini (1996), which selects complex numbers along circles of suitable radii.
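The idea can be sketched as follows (an illustrative Python fragment, not part of the Library; for simplicity a single Cauchy-bound radius is used, whereas Bini's scheme selects several radii from the Newton polygon of the coefficient moduli):

```python
import cmath

def initial_estimates(a):
    """Initial root guesses on a circle of suitable radius.

    a[i] is the coefficient of z**i.  Bini (1996) places guesses on
    several circles with radii taken from the Newton polygon of the
    coefficient moduli; a single Cauchy-bound radius is used here as
    a simplified illustration.
    """
    n = len(a) - 1
    r = 1.0 + max(abs(c / a[-1]) for c in a[:-1])   # Cauchy root bound
    # Equally spaced angles, offset so guesses avoid the real axis.
    return [r * cmath.exp(2j * cmath.pi * k / n + 0.7j) for k in range(n)]

z0 = initial_estimates([2.0, -3.0, 1.0])            # (z - 1)(z - 2)
```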
Updates to each root approximation ${z}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$, are then made using the iterative formula from Petković et al. (1997):
$${\hat{z}}_{j}={z}_{j}-\frac{n}{{G}_{j}\pm \sqrt{\left(n-1\right)\left(n{H}_{j}-{G}_{j}^{2}\right)}}\text{,}$$
where
$${G}_{j}=\frac{{P}^{\prime }\left({z}_{j}\right)}{P\left({z}_{j}\right)}-\sum _{i\ne j}\frac{1}{{z}_{j}-{z}_{i}}\quad \text{and}\quad {H}_{j}={\left(\frac{{P}^{\prime }\left({z}_{j}\right)}{P\left({z}_{j}\right)}\right)}^{2}-\frac{{P}^{\prime \prime }\left({z}_{j}\right)}{P\left({z}_{j}\right)}-\sum _{i\ne j}\frac{1}{{\left({z}_{j}-{z}_{i}\right)}^{2}}\text{.}$$
The nearest ${\hat{z}}_{j}$ to the current approximation is used, by selecting the sign that maximizes the modulus of the denominator of the correction term. The subtraction of the sum terms when computing ${G}_{j}$ and ${H}_{j}$ constitutes the implicit deflation strategy of the modified Laguerre method.
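One update of this scheme can be sketched in Python as follows (an illustrative fragment, not the routine's implementation; the polynomial and its first two derivatives are evaluated by Horner's method, and the convergence tolerance is an arbitrary choice for the sketch):

```python
import cmath

def horner_012(a, z):
    """Evaluate p, p', p'' at z by Horner; a[i] is the coefficient of z**i."""
    p = pp = ppp = 0j
    for c in reversed(a):
        ppp = ppp * z + pp
        pp = pp * z + p
        p = p * z + c
    return p, pp, 2 * ppp

def laguerre_step(a, z, j):
    """One modified Laguerre update for root approximation j.

    The sums over the other approximations implement implicit deflation.
    """
    n = len(a) - 1
    zj = z[j]
    p, dp, d2p = horner_012(a, zj)
    if abs(p) < 1e-14:                    # effectively converged; freeze
        return zj
    s1 = sum(1 / (zj - z[i]) for i in range(n) if i != j)
    s2 = sum(1 / (zj - z[i]) ** 2 for i in range(n) if i != j)
    G = dp / p - s1                       # deflated p'/p
    H = (dp / p) ** 2 - d2p / p - s2      # deflated Laguerre H-term
    root = cmath.sqrt((n - 1) * (n * H - G ** 2))
    # Choose the sign that maximizes the modulus of the denominator.
    denom = G + root if abs(G + root) >= abs(G - root) else G - root
    return zj - n / denom

a = [2.0, -3.0, 1.0]                      # z**2 - 3z + 2 = (z - 1)(z - 2)
z = [0.9 + 0.1j, 2.2 - 0.1j]
for _ in range(20):
    z = [laguerre_step(a, z, j) for j in range(2)]
```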
The relative backward error of a root approximation ${z}_{j}$ is given by
$$\eta \left({z}_{j}\right)=\frac{|P\left({z}_{j}\right)|}{\alpha \left({z}_{j}\right)}\text{,}$$
where $\alpha \left({z}_{j}\right)$ is a running bound on the rounding error incurred in the floating-point evaluation of $P\left({z}_{j}\right)$ by Horner's method. A root approximation is deemed to have converged if $\eta \left({z}_{j}\right)\le 2\times$ machine precision, at which point updates of that root cease. If the stopping criterion holds, then the computed root is the exact root of a polynomial whose coefficients are perturbed by no more than the rounding error committed in the floating-point evaluation of $P\left({z}_{j}\right)$.
The condition number of each root is also computed, as a measure of its sensitivity to changes in the coefficients of the polynomial. It is given by
$$\kappa \left({z}_{j}\right)=\frac{\alpha \left({z}_{j}\right)}{|{z}_{j}|\phantom{\rule{0.1em}{0ex}}|{P}^{\prime }\left({z}_{j}\right)|}\text{,}$$
where $\alpha \left({z}_{j}\right)$ is the error bound on the floating-point evaluation of $P\left({z}_{j}\right)$ used in the backward error above.
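Both quantities can be sketched as follows (an illustrative Python fragment; for simplicity the perturbation bound is taken to be $\sum_{i}|a_i||z|^{i}$, a standard simplification, rather than the running error bound the routine computes alongside the Horner evaluation):

```python
def poly_val(a, z):
    """Horner evaluation of a[0] + a[1] z + ... + a[n] z**n."""
    p = 0j
    for c in reversed(a):
        p = p * z + c
    return p

def poly_der(a, z):
    """Horner evaluation of the first derivative."""
    d = 0j
    for i in range(len(a) - 1, 0, -1):
        d = d * z + i * a[i]
    return d

def backward_error(a, z):
    """eta(z) = |P(z)| / sum_i |a_i| |z|**i (simplified perturbation bound)."""
    alpha = sum(abs(c) * abs(z) ** i for i, c in enumerate(a))
    return abs(poly_val(a, z)) / alpha

def condition(a, z):
    """kappa(z): sensitivity of the root z to relative coefficient changes."""
    alpha = sum(abs(c) * abs(z) ** i for i, c in enumerate(a))
    return alpha / (abs(z) * abs(poly_der(a, z)))

a = [2.0, -3.0, 1.0]          # (z - 1)(z - 2)
eta = backward_error(a, 1.0)  # exact root: zero backward error
kappa = condition(a, 1.0)
```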
Root approximations can be further refined with optional 'polishing' processes. A simple polishing process is provided that carries out a single iteration of Newton's method, which proves quick yet often effective. Alternatively, a compensated polishing process from Cameron and O'Neill (2019) can be applied. This iterative method combines the implicit deflation of the modified Laguerre method, with the accuracy of evaluating polynomials and their derivatives using the compensated Horner's method from Graillat et al. (2005). Compensated polishing yields approximations with a limiting accuracy as if computed in twice the working precision.
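The simple polishing option amounts to a single Newton step per root, which can be sketched as follows (illustrative Python, not the routine's code):

```python
def newton_polish(a, z):
    """One Newton iteration z - P(z)/P'(z); a[i] is the coefficient of z**i.

    P and P' are evaluated together in a single Horner pass.
    """
    p = dp = 0j
    for c in reversed(a):
        dp = dp * z + p
        p = p * z + c
    return z - p / dp if dp != 0 else z

# Slightly perturbed approximation to the root z = 2 of (z - 1)(z - 2):
z = newton_polish([2.0, -3.0, 1.0], 2.0 + 1e-6)
```

Near a simple well-conditioned root, one such step roughly squares the error, which is why this cheap option is often all that is needed.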
It is recommended that you read Section 9.1 for advice on selecting an appropriate polishing process.
4 References
Bini D A (1996) Numerical computation of polynomial zeros by means of Aberth's method. Numerical Algorithms 13 179–200. Springer US.
Cameron T R (2018) An effective implementation of a modified Laguerre method for the roots of a polynomial. Numerical Algorithms. Springer US. https://doi.org/10.1007/s11075-018-0641-9
Cameron T R and O'Neill A (2019) On a compensated polishing technique for polynomial root solvers. To be published.
Graillat S, Louvet N and Langlois P (2005) Compensated Horner scheme. Technical Report. Université de Perpignan Via Domitia.
Petković M, Ilić S and Tričković S (1997) A family of simultaneous zero-finding methods. Computers & Mathematics with Applications 34 (10) 49–59. https://doi.org/10.1016/S0898-1221(97)00206-X
Wilkinson J H (1959) The evaluation of the zeros of ill-conditioned polynomials. Part I. Numerische Mathematik 1 (1) 150–166. Springer-Verlag.
On exit: ${\mathbf{z}}\left(\mathit{j}\right)$ holds the approximation of the root ${z}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$.
6: $\mathbf{berr}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: ${\mathbf{berr}}\left(\mathit{j}\right)$ holds the relative backward error, $\eta \left({z}_{\mathit{j}}\right)$, for $\mathit{j}=1,2,\dots ,n$.
7: $\mathbf{cond}\left({\mathbf{n}}\right)$ – Real (Kind=nag_wp) array Output
On exit: ${\mathbf{cond}}\left(\mathit{j}\right)$ holds the condition number, $\kappa \left({z}_{\mathit{j}}\right)$, for $\mathit{j}=1,2,\dots ,n$.
8: $\mathbf{conv}\left({\mathbf{n}}\right)$ – Integer array Output
On exit: ${\mathbf{conv}}\left(\mathit{j}\right)$ indicates the convergence status of the root approximation, ${z}_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$.
${\mathbf{conv}}\left(j\right)\ge 0$
Successfully converged after ${\mathbf{conv}}\left(j\right)$ iterations.
Note:
if ${\mathbf{polish}}=2$, conv refers to convergence in the compensated polishing process.
9: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $\mathrm{-1}$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes an error message to be printed and program execution to be halted; otherwise program execution continues. A value of $\mathrm{-1}$ means that an error message is printed, while a value of $1$ means that it is not.
If halting is not appropriate, the value $\mathrm{-1}$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $\mathrm{-1}$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
On entry, the complex variable ${\mathbf{a}}\left(0\right)=(0.0,0.0)$.
${\mathbf{ifail}}=2$
On entry, ${\mathbf{n}}=\u27e8\mathit{\text{value}}\u27e9$.
Constraint: ${\mathbf{n}}\ge 1$.
${\mathbf{ifail}}=3$
On entry, ${\mathbf{itmax}}=\u27e8\mathit{\text{value}}\u27e9$. Constraint: ${\mathbf{itmax}}\ge 1$.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{polish}}=\u27e8\mathit{\text{value}}\u27e9$. Constraint: ${\mathbf{polish}}=0$, $1$ or $2$.
${\mathbf{ifail}}=5$
Convergence has failed for at least one root approximation.
${\mathbf{ifail}}=6$
c02aaf encountered overflow during at least one root approximation.
Check conv and consider scaling the polynomial (see Section 9.2).
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
All roots are evaluated as accurately as possible, but because of the inherent nature of the problem complete accuracy cannot be guaranteed.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
c02aaf is not threaded in any implementation.
9 Further Comments
9.1 Selecting a Polishing Process
The choice of polishing technique ultimately depends on two factors: how well conditioned the problem is, and a preference between run time and accuracy. For a detailed analysis of the polishing techniques, see Cameron and O'Neill (2019).
Well-conditioned Problems
Simple polishing is effective in reducing the error in approximations of well-conditioned roots, and does so with a negligible increase in run time. Compensated polishing achieves comparable accuracy, but is approximately ten times slower than simple polishing.
Simple polishing (${\mathbf{polish}}=1$) is recommended for well-conditioned problems.
Ill-conditioned Problems
There is a dramatic difference in accuracy between the two polishing techniques for ill-conditioned polynomials. Unpolished approximations are inaccurate and simple polishing often proves ineffective. However, compensated polishing is able to reduce errors by several orders of magnitude.
Compensated polishing (${\mathbf{polish}}=2$) is highly recommended for ill-conditioned problems.
9.2 Scaling the Polynomial
c02aaf attempts to avoid overflow conditions where possible. However, if the routine fails with ${\mathbf{ifail}}={\mathbf{6}}$, such conditions could not be avoided for the given polynomial. Use conv to identify the roots for which overflow occurred, as other approximations may still have succeeded.
Extremely large and/or small coefficients are likely to be the cause of overflow failures. In such cases, you are recommended to scale the independent variable $\left(z\right)$ so that the disparity between the largest and smallest coefficient in magnitude is reduced. That is, use the routine to locate the zeros of the polynomial $sP\left(cz\right)$ for some suitable values of $c$ and $s$. For example, if the original polynomial was $P\left(z\right)={2}^{\mathrm{-100}}i+{2}^{100}{z}^{20}$, then choosing $c={2}^{\mathrm{-10}}$ and $s={2}^{100}$, for instance, would yield the scaled polynomial $i+{z}^{20}$, which is well-behaved relative to overflow and has zeros which are ${2}^{10}$ times those of $P\left(z\right)$.
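The effect of this scaling can be sketched as follows (illustrative Python, with numpy.roots standing in for c02aaf; the unscaled coefficients $2^{-100}$ and $2^{100}$ would otherwise drive intermediate quantities toward overflow or underflow):

```python
import numpy as np

# P(z) = 2**-100 * i + 2**100 * z**20 has a huge coefficient disparity;
# solve the scaled polynomial s*P(c*z) = i + z**20 instead, with
# c = 2**-10 and s = 2**100 as in the text.
c = 2.0 ** -10

# numpy.roots expects coefficients from highest to lowest degree.
scaled = np.zeros(21, dtype=complex)
scaled[0] = 1.0           # z**20 term
scaled[-1] = 1j           # constant term i
w = np.roots(scaled)      # zeros of the well-behaved scaled polynomial

roots_of_P = c * w        # undo the change of variable z -> c*z
```

The zeros of the scaled polynomial all have modulus $1$, so the recovered zeros of $P$ have modulus $2^{-10}$, consistent with the zeros of the scaled polynomial being $2^{10}$ times those of $P\left(z\right)$.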
10 Example
The example program for c02aaf demonstrates two problems, given in the subroutines ex1_basic and ex2_polishing. (Note that by default, the second example is switched off because the results may be machine dependent. Edit the program in the obvious way to switch it on.)
Example 1: Basic Problem
This example finds the roots of a polynomial of degree $5$ with coefficients ${a}_{0},\dots ,{a}_{5}$ given by
${a}_{0}=(5.0+6.0i)$,
${a}_{1}=(30.0+20.0i)$,
${a}_{2}=-(0.2+6.0i)$,
${a}_{3}=(50.0+100000.0i)$,
${a}_{4}=-(2.0-40.0i)$ and
${a}_{5}=(10.0+1.0i)$.
Example 2: Polishing Processes
This example finds the roots of a polynomial of the form
$$(z-1)(z-2)\cdots (z-n)=0\text{,}$$
first proposed by Wilkinson (1959) as an example of a polynomial whose roots are ill-conditioned, being highly sensitive to small changes in the coefficients. A polishing mode is demonstrated with $n=10$, and the maximum forward and relative backward errors of the approximations are displayed.
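The ill-conditioning can be reproduced in a few lines of Python (illustrative only; numpy.roots stands in for c02aaf, and the perturbation size $10^{-8}$ is an arbitrary choice for the sketch):

```python
import numpy as np

n = 10
# Coefficients of (z - 1)(z - 2)...(z - n), highest degree first.
coeffs = np.poly(np.arange(1, n + 1))

approx = np.sort(np.roots(coeffs).real)
exact = np.arange(1, n + 1, dtype=float)
forward_err = np.max(np.abs(approx - exact))

# A tiny relative perturbation of one coefficient moves the roots by
# many orders of magnitude more, illustrating the ill-conditioning
# Wilkinson (1959) described.
perturbed = coeffs.copy()
perturbed[1] *= 1 + 1e-8          # perturb the z**9 coefficient
shift = np.max(np.abs(np.sort(np.roots(perturbed).real) - exact))
```

Here `forward_err` stays near machine-precision level while `shift` is larger by several orders of magnitude, mirroring the sensitivity that makes compensated polishing worthwhile for such problems.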