# NAG CL Interface: g02gbc (glm_binomial)


## 1 Purpose

g02gbc fits a generalized linear model with binomial errors.

## 2 Specification

```c
#include <nag.h>

void g02gbc (Nag_Link link, Nag_IncludeMean mean, Integer n, const double x[],
             Integer tdx, Integer m, const Integer sx[], Integer ip,
             const double y[], const double binom_t[], const double wt[],
             const double offset[], double *dev, double *df, double b[],
             Integer *rank, double se[], double cov[], double v[],
             Integer tdv, double tol, Integer max_iter, Integer print_iter,
             const char *outfile, double eps, NagError *fail)
```
The function may be called by the names: g02gbc, nag_correg_glm_binomial or nag_glm_binomial.

## 3 Description

A generalized linear model with binomial errors consists of the following elements:
1. (a) a set of $n$ observations, ${y}_{i}$, from a binomial distribution:
 $$\binom{t}{y} \pi^{y} (1-\pi)^{t-y}.$$
2. (b) $X$, a set of $p$ independent variables for each observation, ${x}_{1},{x}_{2},\dots ,{x}_{p}$.
3. (c) a linear model:
 $$\eta = \sum \beta_j x_j.$$
4. (d) a link function $\eta =g\left(\mu \right)$, linking the linear predictor, $\eta$, and the mean of the distribution, $\mu =\pi t$.
The possible link functions are:
1. (i) logistic link: $\eta =\log\left(\frac{\mu }{t-\mu }\right)$,
2. (ii) probit link: $\eta ={\Phi }^{-1}\left(\frac{\mu }{t}\right)$,
3. (iii) complementary log-log link: $\eta =\log\left(-\log\left(1-\frac{\mu }{t}\right)\right)$.
5. (e) a measure of fit, the deviance:
 $$\sum_{i=1}^{n} \mathrm{dev}(y_i,\hat{\mu}_i) = \sum_{i=1}^{n} 2\left\{ y_i \log\left(\frac{y_i}{\hat{\mu}_i}\right) + (t_i - y_i)\log\left(\frac{t_i - y_i}{t_i - \hat{\mu}_i}\right)\right\}.$$
The linear arguments are estimated by iterative weighted least squares. An adjusted dependent variable, $z$, is formed:
 $$z = \eta + (y-\mu)\frac{d\eta}{d\mu}$$
and a working weight, $w$,
 $$w = \left(\tau \frac{d\eta}{d\mu}\right)^{-2}, \quad\text{where}\quad \tau = \sqrt{\frac{\mu(t-\mu)}{t}}.$$
At each iteration an approximation to the estimate of $\beta$, $\stackrel{^}{\beta }$ is found by the weighted least squares regression of $z$ on $X$ with weights $w$.
g02gbc finds a $QR$ decomposition of ${w}^{\frac{1}{2}}X$, i.e.,
• ${w}^{\frac{1}{2}}X=QR$ where $R$ is a $p×p$ triangular matrix and $Q$ is an $n×p$ column orthogonal matrix.
If $R$ is of full rank then $\stackrel{^}{\beta }$ is the solution to:
• $R\stackrel{^}{\beta }={Q}^{\mathrm{T}}{w}^{\frac{1}{2}}z$
If $R$ is not of full rank a solution is obtained by means of a singular value decomposition (SVD) of $R$.
 $$R = Q_* \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix} P^{\mathrm{T}},$$
where $D$ is a $k×k$ diagonal matrix with nonzero diagonal elements, $k$ being the rank of $R$ and ${w}^{\frac{1}{2}}X$.
This gives the solution
 $$\hat{\beta} = P_1 D^{-1} Q_{*1}^{\mathrm{T}} Q^{\mathrm{T}} w^{\frac{1}{2}} z,$$
• ${P}_{1}$ being the first $k$ columns of $P$, i.e., $P=\left({P}_{1}\,{P}_{0}\right)$, and ${Q}_{*1}$ being the first $k$ columns of ${Q}_{*}$.
The iterations are continued until there is only a small change in the deviance.
The initial values for the algorithm are obtained by taking
 $$\hat{\eta} = g(y).$$
The fit of the model can be assessed by examining and testing the deviance, in particular, by comparing the difference in deviance between nested models, i.e., when one model is a sub-model of the other. The difference in deviance between two nested models has, asymptotically, a ${\chi }^{2}$ distribution with degrees of freedom given by the difference in the degrees of freedom associated with the two deviances.
The argument estimates, $\hat{\beta}$, are asymptotically Normally distributed with variance-covariance matrix:
• $C={R}^{-1}{\left({R}^{-1}\right)}^{\mathrm{T}}$ in the full rank case, otherwise
• $C={P}_{1}{D}^{-2}{P}_{1}^{\mathrm{T}}$.
The residuals and influence statistics can also be examined.
The estimated linear predictor $\hat{\eta}=X\hat{\beta}$ can be written as $H{w}^{\frac{1}{2}}z$ for an $n×n$ matrix $H$. The $i$th diagonal element of $H$, ${h}_{i}$, gives a measure of the influence of the $i$th values of the independent variables on the fitted regression model. These are known as leverages.
The fitted values are given by $\stackrel{^}{\mu }={g}^{-1}\left(\stackrel{^}{\eta }\right)$.
g02gbc also computes the deviance residuals, $r$:
 $$r_i = \mathrm{sign}(y_i - \hat{\mu}_i)\sqrt{\mathrm{dev}(y_i, \hat{\mu}_i)}.$$
An option allows prior weights to be used with the model.
In many linear regression models the first term is taken as a mean term or an intercept, i.e., ${x}_{\mathit{i},1}=1$, for $\mathit{i}=1,2,\dots ,n$. This is provided as an option.
Often only some of the possible independent variables are included in a model; the facility to select variables to be included in the model is provided.
If part of the linear predictor can be represented by a variable with a known coefficient then this can be included in the model by using an offset, $o$:
 $$\eta = o + \sum \beta_j x_j.$$
If the model is not of full rank the solution given will be only one of the possible solutions. Other estimates may be obtained by applying constraints to the arguments. These solutions can be obtained by using g02gkc after using g02gbc.
Only certain linear combinations of the arguments will have unique estimates; these are known as estimable functions and can be estimated and tested using g02gnc.
Details of the SVD are made available in the form of the matrix ${P}^{*}$:
 $$P^* = \begin{pmatrix} D^{-1} P_1^{\mathrm{T}} \\ P_0^{\mathrm{T}} \end{pmatrix}.$$

## 4 References

Cook R D and Weisberg S (1982) Residuals and Influence in Regression Chapman and Hall
Cox D R (1983) Analysis of Binary Data Chapman and Hall

## 5 Arguments

1: $\mathbf{link}$Nag_Link Input
On entry: indicates which link function is to be used.
${\mathbf{link}}=\mathrm{Nag_Logistic}$
A logistic link is used.
${\mathbf{link}}=\mathrm{Nag_Probit}$
A probit link is used.
${\mathbf{link}}=\mathrm{Nag_Compl}$
A complementary log-log link is used.
Constraint: ${\mathbf{link}}=\mathrm{Nag_Logistic}$, $\mathrm{Nag_Probit}$ or $\mathrm{Nag_Compl}$.
2: $\mathbf{mean}$Nag_IncludeMean Input
On entry: indicates if a mean term is to be included.
${\mathbf{mean}}=\mathrm{Nag_MeanInclude}$
A mean term, (intercept), will be included in the model.
${\mathbf{mean}}=\mathrm{Nag_MeanZero}$
The model will pass through the origin (zero point).
Constraint: ${\mathbf{mean}}=\mathrm{Nag_MeanInclude}$ or $\mathrm{Nag_MeanZero}$.
3: $\mathbf{n}$Integer Input
On entry: the number of observations, $n$.
Constraint: ${\mathbf{n}}\ge 2$.
4: $\mathbf{x}\left[{\mathbf{n}}×{\mathbf{tdx}}\right]$const double Input
On entry: ${\mathbf{x}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdx}}+\mathit{j}-1\right]$ must contain the $\mathit{i}$th observation for the $\mathit{j}$th independent variable, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,{\mathbf{m}}$.
5: $\mathbf{tdx}$Integer Input
On entry: the stride separating matrix column elements in the array x.
Constraint: ${\mathbf{tdx}}\ge {\mathbf{m}}$.
6: $\mathbf{m}$Integer Input
On entry: the total number of independent variables.
Constraint: ${\mathbf{m}}\ge 1$.
7: $\mathbf{sx}\left[{\mathbf{m}}\right]$const Integer Input
On entry: indicates which independent variables are to be included in the model. If ${\mathbf{sx}}\left[j-1\right]>0$, then the variable contained in the $j$th column of x is included in the regression model.
Constraints:
• ${\mathbf{sx}}\left[\mathit{j}-1\right]\ge 0$, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$;
• if ${\mathbf{mean}}=\mathrm{Nag_MeanInclude}$, then exactly ${\mathbf{ip}}-1$ values of sx must be $>0$;
• if ${\mathbf{mean}}=\mathrm{Nag_MeanZero}$, then exactly ip values of sx must be $>0$.
8: $\mathbf{ip}$Integer Input
On entry: the number $p$ of independent variables in the model, including the mean or intercept if present.
Constraint: ${\mathbf{ip}}>0$.
9: $\mathbf{y}\left[{\mathbf{n}}\right]$const double Input
On entry: observations on the dependent variable, ${\mathit{y}}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
Constraint: $0.0\le {\mathbf{y}}\left[\mathit{i}-1\right]\le {\mathbf{binom_t}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,n$.
10: $\mathbf{binom_t}\left[{\mathbf{n}}\right]$const double Input
On entry: the binomial denominator, $t$.
Constraint: ${\mathbf{binom_t}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$.
11: $\mathbf{wt}\left[{\mathbf{n}}\right]$const double Input
On entry: if weighted estimates are required, then wt must contain the weights to be used. Otherwise wt need not be defined and may be set to NULL.
If ${\mathbf{wt}}\left[i-1\right]=0.0$, then the $i$th observation is not included in the model, in which case the effective number of observations is the number of observations with positive weights.
If wt is NULL, then the effective number of observations is $n$.
Constraint: ${\mathbf{wt}}\phantom{\rule{0.25em}{0ex}}\text{is}\phantom{\rule{0.25em}{0ex}}\mathbf{NULL}$ or ${\mathbf{wt}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$.
12: $\mathbf{offset}\left[{\mathbf{n}}\right]$const double Input
On entry: if an offset is required then offset must contain the values of the offset $o$. Otherwise offset must be supplied as NULL.
13: $\mathbf{dev}$double * Output
On exit: the deviance for the fitted model.
14: $\mathbf{df}$double * Output
On exit: the degrees of freedom associated with the deviance for the fitted model.
15: $\mathbf{b}\left[{\mathbf{ip}}\right]$double Output
On exit: ${\mathbf{b}}\left[i-1\right]$, $i=1,\dots ,{\mathbf{ip}}$ contains the estimates of the arguments of the generalized linear model, $\stackrel{^}{\beta }$.
If ${\mathbf{mean}}=\mathrm{Nag_MeanInclude}$, then ${\mathbf{b}}\left[0\right]$ will contain the estimate of the mean argument and ${\mathbf{b}}\left[i\right]$ will contain the coefficient of the variable contained in column $j$ of x, where ${\mathbf{sx}}\left[j-1\right]$ is the $i$th positive value in the array sx.
If ${\mathbf{mean}}=\mathrm{Nag_MeanZero}$, then ${\mathbf{b}}\left[i-1\right]$ will contain the coefficient of the variable contained in column $j$ of x, where ${\mathbf{sx}}\left[j-1\right]$ is the $i$th positive value in the array sx.
16: $\mathbf{rank}$Integer * Output
On exit: the rank of the independent variables.
If the model is of full rank, then ${\mathbf{rank}}={\mathbf{ip}}$.
If the model is not of full rank, then rank is an estimate of the rank of the independent variables. rank is calculated as the number of singular values greater than ${\mathbf{eps}}×$ (largest singular value). It is possible for the SVD to be carried out but rank to be returned as ip.
17: $\mathbf{se}\left[{\mathbf{ip}}\right]$double Output
On exit: the standard errors of the linear arguments.
${\mathbf{se}}\left[\mathit{i}-1\right]$ contains the standard error of the parameter estimate in ${\mathbf{b}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{ip}}$.
18: $\mathbf{cov}\left[{\mathbf{ip}}×\left({\mathbf{ip}}+1\right)/2\right]$double Output
On exit: the ${\mathbf{ip}}×\left({\mathbf{ip}}+1\right)/2$ elements of cov contain the upper triangular part of the variance-covariance matrix of the ip parameter estimates given in b. They are stored packed by column, i.e., the covariance between the parameter estimate given in ${\mathbf{b}}\left[\mathit{i}\right]$ and the parameter estimate given in ${\mathbf{b}}\left[\mathit{j}\right]$, $\mathit{j}\ge \mathit{i}$, is stored in ${\mathbf{cov}}\left[\mathit{j}\left(\mathit{j}+1\right)/2+\mathit{i}\right]$, for $\mathit{i}=0,1,\dots ,{\mathbf{ip}}-1$ and $\mathit{j}=\mathit{i},\dots ,{\mathbf{ip}}-1$.
19: $\mathbf{v}\left[{\mathbf{n}}×{\mathbf{tdv}}\right]$double Output
On exit: auxiliary information on the fitted model.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}\right]$, contains the linear predictor value, ${\eta }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}+1\right]$, contains the fitted value, ${\stackrel{^}{\mu }}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}+2\right]$, contains the variance standardization, ${\tau }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}+3\right]$, contains the working weight, ${w}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}+4\right]$, contains the deviance residual, ${r}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdv}}+5\right]$, contains the leverage, ${h}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
${\mathbf{v}}\left[\left(i-1\right)×{\mathbf{tdv}}+\mathit{j}-1\right]$, for $\mathit{j}=7,8,\dots ,{\mathbf{ip}}+6$, contains the results of the $QR$ decomposition or the singular value decomposition.
If the model is not of full rank, i.e., ${\mathbf{rank}}<{\mathbf{ip}}$, then the first ip rows of columns $7$ to ${\mathbf{ip}}+6$ contain the ${P}^{*}$ matrix.
20: $\mathbf{tdv}$Integer Input
On entry: the stride separating matrix column elements in the array v.
Constraint: ${\mathbf{tdv}}\ge {\mathbf{ip}}+6$.
21: $\mathbf{tol}$double Input
On entry: indicates the accuracy required for the fit of the model.
The iterative weighted least squares procedure is deemed to have converged if the absolute change in deviance between iterations is less than ${\mathbf{tol}}×\left(1.0+\text{Current Deviance}\right)$. This is approximately an absolute precision if the deviance is small and a relative precision if the deviance is large.
If $0.0\le {\mathbf{tol}}<$ machine precision, then the function will use $10×$ machine precision.
Constraint: ${\mathbf{tol}}\ge 0.0$.
22: $\mathbf{max_iter}$Integer Input
On entry: the maximum number of iterations for the iterative weighted least squares.
If ${\mathbf{max_iter}}=0$, then a default value of 10 is used.
Constraint: ${\mathbf{max_iter}}\ge 0$.
23: $\mathbf{print_iter}$Integer Input
On entry: indicates if the printing of information on the iterations is required and the rate at which printing is produced.
${\mathbf{print_iter}}\le 0$
There is no printing.
${\mathbf{print_iter}}>0$
The following items are printed every print_iter iterations:
1. (i)the deviance,
2. (ii)the current estimates, and
3. (iii)if the weighted least squares equations are singular then this is indicated.
24: $\mathbf{outfile}$const char * Input
On entry: a null terminated character string giving the name of the file to which results should be printed. If outfile is NULL or an empty string then the stdout stream is used. Note that the file will be opened in the append mode.
25: $\mathbf{eps}$double Input
On entry: the value of eps is used to decide if the independent variables are of full rank and, if not, what the rank of the independent variables is. The smaller the value of eps the stricter the criterion for selecting the singular value decomposition.
If $0.0\le {\mathbf{eps}}<$ machine precision, then the function will use machine precision instead.
Constraint: ${\mathbf{eps}}\ge 0.0$.
26: $\mathbf{fail}$NagError * Input/Output
The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_2_INT_ARG_LT
On entry, ${\mathbf{tdv}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{ip}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdv}}\ge {\mathbf{ip}}+6$.
On entry, ${\mathbf{tdx}}=⟨\mathit{\text{value}}⟩$ while ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{tdx}}\ge {\mathbf{m}}$.
NE_2_REAL_ARG_GT
On entry, ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$ while ${\mathbf{binom_t}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$. These arguments must satisfy ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]\le {\mathbf{binom_t}}\left[⟨\mathit{\text{value}}⟩\right]$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument link had an illegal value.
On entry, argument mean had an illegal value.
NE_INT_ARG_LT
On entry, ${\mathbf{ip}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{ip}}\ge 1$.
On entry, ${\mathbf{m}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, max_iter must not be less than 0: ${\mathbf{max_iter}}=⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{sx}}\left[⟨\mathit{\text{value}}⟩\right]$ must not be less than 0: ${\mathbf{sx}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
NE_IP_GT_OBSERV
Parameter ip is greater than the effective number of observations.
NE_IP_INCOMP_SX
Parameter ip is incompatible with arguments mean and sx.
NE_LSQ_ITER_NOT_CONV
The iterative weighted least squares has failed to converge in ${\mathbf{max_iter}}=⟨\mathit{\text{value}}⟩$ iterations. The value of max_iter could be increased but it may be advantageous to examine the convergence using the print_iter option. This may indicate that the convergence is slow because the solution is at a boundary in which case it may be better to reformulate the model.
NE_NOT_APPEND_FILE
Cannot open file $⟨\mathit{string}⟩$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $⟨\mathit{string}⟩$.
NE_RANK_CHANGED
The rank of the model has changed during the weighted least squares iterations. The estimate for $\beta$ returned may be reasonable, but you should check how the deviance has changed during iterations.
NE_REAL_ARG_LT
On entry, ${\mathbf{binom_t}}\left[⟨\mathit{\text{value}}⟩\right]$ must not be less than 0.0: ${\mathbf{binom_t}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
On entry, eps must not be less than 0.0: eps $\text{}=⟨\mathit{\text{value}}⟩$.
On entry, tol must not be less than 0.0: ${\mathbf{tol}}=⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{wt}}\left[⟨\mathit{\text{value}}⟩\right]$ must not be less than 0.0: ${\mathbf{wt}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]$ must not be less than 0.0: ${\mathbf{y}}\left[⟨\mathit{\text{value}}⟩\right]=⟨\mathit{\text{value}}⟩$.
NE_SVD_NOT_CONV
The singular value decomposition has failed to converge.
NE_VALUE_AT_BOUNDARY_B
A fitted value is at a boundary, i.e., $0.0$ or $1.0$. This may occur if there are y values of $0.0$ or equal to binom_t and the model is too complex for the data. The model should be reformulated with, perhaps, some observations dropped.
NE_ZERO_DOF_ERROR
The degrees of freedom for error are $0$. A saturated model has been fitted.

## 7 Accuracy

The accuracy is determined by tol as described in Section 5. As the deviance is a function of $\mathrm{log}\mu$, the accuracy of the $\hat{\beta}$ values will be a function of tol. tol should, therefore, be set to a smaller value than the accuracy required for $\hat{\beta}$.

## 8 Parallelism and Performance

Background information to multithreading can be found in the Multithreading documentation.
g02gbc is not threaded in any implementation.

## 9 Further Comments

None.

## 10 Example

A linear trend $\left(x=-1,0,1\right)$ is fitted to data relating the incidence of carriers of Streptococcus pyogenes to size of tonsils. The data is described in Cox (1983).

### 10.1 Program Text

Program Text (g02gbce.c)

### 10.2 Program Data

Program Data (g02gbce.d)

### 10.3 Program Results

Program Results (g02gbce.r)