NAG Library Chapter Introduction
G10 (smooth)
Smoothing in Statistics
1 Scope of the Chapter
This chapter is concerned with methods for smoothing data. Included are methods for density estimation, smoothing time series data, and statistical applications of splines. These methods may also be viewed as nonparametric modelling.
2 Background to the Problems
2.1 Smoothing Methods
Many of the methods used in statistics involve fitting a model whose form is determined by a small number of parameters, for example, a distribution model like the gamma distribution, a linear regression model or an autoregressive model in time series. In these cases the fitting involves the estimation of the small number of parameters from the data. When modelling data with such parametric models there are two important stages in addition to the estimation of the parameters: the identification of a suitable model, for example, the selection of a gamma distribution rather than a Weibull distribution, and checking whether the fitted model adequately describes the data. While these parametric models can be fairly flexible, they will not adequately fit all datasets, especially if the number of parameters is to be kept small.
Alternative models based on smoothing can be used. These models are not written explicitly in terms of parameters, and they are sufficiently flexible for a much wider range of situations than parametric models. The main requirement for such a model to be suitable is that the underlying relationship is expected to be smooth, which excludes situations where, for example, a step function would be expected.
These smoothing methods can be used in a variety of ways, for example:
1. producing smoothed plots to aid understanding;
2. identifying a suitable parametric model from the shape of the smoothed data;
3. eliminating complex effects that are not of direct interest so that attention can be focused on the effects of interest.
Several smoothing techniques make use of a smoothing parameter, which can either be chosen by you or estimated from the data. The smoothing parameter balances the two criteria of smoothness of the fitted model and closeness of the fit of the model to the data: for small values of the smoothing parameter the model closely follows the data, while for large values the fitted model is smoother but, in general, fits the data less closely.
The smoothing parameter can either be chosen using previous experience of a suitable value for such data, or estimated from the data. The estimation can be formal, using a criterion such as cross-validation, or informal, by trying different values and examining the results by means of suitable graphs.
Smoothing methods can be used in three important areas of statistics: regression modelling, distribution modelling and time series modelling.
2.2 Smoothing Splines and Regression Models
For a set of observations $(x_i, y_i)$, for $i = 1, 2, \ldots, n$, the spline provides a flexible smooth function for situations in which a simple polynomial or nonlinear regression model is not suitable.
Cubic smoothing splines arise as the function, $\hat{f}$, with continuous first derivative which minimizes
$$\sum_{i=1}^{n} w_i \left( y_i - \hat{f}(x_i) \right)^2 + \lambda \int_{-\infty}^{\infty} \left( \hat{f}''(x) \right)^2 \, dx ,$$
where $w_i$ is the (optional) weight for the $i$th observation and $\lambda$ is the smoothing parameter. This criterion consists of two parts: the first measures the fit of the curve and the second the smoothness of the curve. The value of the smoothing parameter, $\lambda$, weights these two aspects: larger values of $\lambda$ give a smoother fitted curve but, in general, a poorer fit.
Splines are linear smoothers since the fitted values, $\hat{y}_i$, can be written as a linear function of the observed values $y_i$, that is,
$$\hat{y} = H y$$
for a matrix $H$. The degrees of freedom for the spline is $\mathrm{trace}(H)$, giving residual degrees of freedom
$$\mathrm{trace}(I - H) = \sum_{i=1}^{n} \left( 1 - h_{ii} \right) .$$
The diagonal elements of $H$, $h_{ii}$, are the leverages.
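To make the hat-matrix formulation concrete, the sketch below builds a linear smoother for equally spaced data by replacing the integrated squared second derivative with a discrete second-difference penalty. This is a simplified analogue of the smoothing-spline criterion, not the algorithm used by g10abf; the function name `penalized_smoother` and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def penalized_smoother(y, lam, w=None):
    """Minimize sum_i w_i*(y_i - f_i)^2 + lam * sum_j (f_j - 2 f_{j+1} + f_{j+2})^2.

    A discrete analogue of the smoothing-spline criterion for equally spaced
    data.  Returns the fitted values, the hat matrix H (y_hat = H @ y), the
    degrees of freedom trace(H) and the leverages h_ii.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    W = np.diag(w)
    D = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second-difference operator
    H = np.linalg.solve(W + lam * D.T @ D, W)  # hat matrix of the linear smoother
    y_hat = H @ y
    df = np.trace(H)                           # degrees of freedom of the smoother
    leverages = np.diag(H)                     # h_ii
    return y_hat, H, df, leverages
```

As `lam` increases the fitted values are driven towards a straight line (the null space of the second-difference operator), while `lam` close to zero reproduces the data, mirroring the behaviour of the smoothing parameter described above.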
The parameter $\lambda$ can be estimated in a number of ways.
1. The degrees of freedom for the spline can be specified, i.e., find $\lambda$ such that $\mathrm{trace}(H) = \nu_0$ for a given $\nu_0$.
2. Minimize the cross-validation (CV), i.e., find $\lambda$ such that the CV is minimized, where
$$\mathrm{CV} = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{r_i}{1 - h_{ii}} \right]^2 .$$
3. Minimize the generalized cross-validation (GCV), i.e., find $\lambda$ such that the GCV is minimized, where
$$\mathrm{GCV} = \frac{n \sum_{i=1}^{n} r_i^2}{\left[ \sum_{i=1}^{n} \left( 1 - h_{ii} \right) \right]^2}$$
and $r_i = y_i - \hat{y}_i$ are the residuals (a sketch computing both criteria is given after this list).
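Under the same illustrative setting as the earlier sketch, the CV and GCV criteria can be evaluated directly from the residuals and leverages of a linear smoother. The helper `cv_gcv` below is hypothetical, not a NAG routine; minimization over $\lambda$ would typically be done by a grid or line search over repeated fits.

```python
import numpy as np

def cv_gcv(y, y_hat, leverages):
    """Cross-validation and generalized cross-validation for a linear smoother."""
    r = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)  # residuals r_i
    h = np.asarray(leverages, dtype=float)                           # leverages h_ii
    n = r.size
    cv = np.mean((r / (1.0 - h)) ** 2)
    gcv = n * np.sum(r ** 2) / np.sum(1.0 - h) ** 2
    return cv, gcv
```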
2.3 Density Estimation
The object of density estimation is to produce from a set of observations a smooth nonparametric estimate of the unknown density function from which the observations were drawn. That is, given a sample of $n$ observations, $x_1, x_2, \ldots, x_n$, from a distribution with unknown density function, $f(x)$, find an estimate of the density function, $\hat{f}(x)$. The simplest form of density estimator is the histogram; this may be defined by
$$\hat{f}(x) = \frac{1}{nh} n_j , \quad a + (j-1)h < x < a + jh , \quad j = 1, 2, \ldots, n_s ,$$
where $n_j$ is the number of observations falling in the interval $a + (j-1)h$ to $a + jh$, $a$ is the lower bound of the histogram and $b = a + n_s h$ is the upper bound. The value $h$ is known as the window width. A simple development of this estimator would be the running histogram estimator
$$\hat{f}(x) = \frac{1}{2nh} n_x , \quad a \leq x \leq b ,$$
where $n_x$ is the number of observations falling in the interval $\left[ x - h , x + h \right]$. This estimator can be written as
$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} w\!\left( \frac{x - x_i}{h} \right)$$
for a function $w$, where
$$w(t) = \begin{cases} \tfrac{1}{2}, & -1 < t < 1 \\ 0, & \text{otherwise.} \end{cases}$$
The function $w$ can be considered as a kernel function. To produce a smoother density estimate, a kernel function, $K(t)$, which satisfies the following conditions can be used:
$$\int_{-\infty}^{\infty} K(t) \, dt = 1 \quad \text{and} \quad K(t) \geq 0 .$$
The kernel density estimator is therefore defined as
$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right) .$$
The choice of $K$ is usually not important, but to ease the computational burden use can be made of the Gaussian kernel defined as
$$K(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2 / 2} .$$
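A direct evaluation of the kernel density estimator with the Gaussian kernel takes only a few lines. The sketch below (plain NumPy with illustrative names) computes $\hat{f}$ on a grid of evaluation points; it is intended to make the formula concrete, not to mirror g10bbf.

```python
import numpy as np

def gaussian_kde(x_eval, data, h):
    """f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h) with the Gaussian kernel K."""
    data = np.asarray(data, dtype=float)
    u = (np.asarray(x_eval, dtype=float)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (data.size * h * np.sqrt(2.0 * np.pi))

# Example: estimate the density of a small sample on a grid of 200 points.
sample = np.array([0.1, 0.4, 0.5, 0.9, 1.2, 2.3, 2.4, 2.5])
grid = np.linspace(-1.0, 4.0, 200)
density = gaussian_kde(grid, sample, h=0.3)
```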
The smoothness of the estimator, $\hat{f}(x)$, depends on the window width, $h$. In general, the larger the value of $h$, the smoother the resulting density estimate. There is, however, the problem of oversmoothing when the value of $h$ is too large and essential features of the distribution function are removed. For example, if the distribution were bimodal, a large value of $h$ may result in a unimodal estimate. The value of $h$ has to be chosen such that the essential shape of the distribution is retained while effects due only to the observed sample are smoothed out. The choice of $h$ can be aided by looking at plots of the density estimate for different values of $h$, or by using cross-validation methods; see Silverman (1990).
Silverman (1990) shows how the Gaussian kernel density estimator can be computed using a fast Fourier transform (FFT).
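The idea behind the FFT approach is to bin the observations onto a regular grid and carry out the kernel smoothing as a convolution in the frequency domain, where the Gaussian kernel has the simple transform $e^{-h^2\omega^2/2}$. The sketch below is a simplified illustration of that idea under stated assumptions (simple binning, no end corrections) and does not reproduce the g10bbf implementation.

```python
import numpy as np

def fft_gaussian_kde(data, h, a, b, m=512):
    """Approximate Gaussian kernel density estimate on an m-point grid over [a, b].

    The data are binned onto the grid and the binned weights are convolved with
    the Gaussian kernel by multiplication in the frequency domain.  [a, b]
    should extend a few multiples of h beyond the data to limit wrap-around
    from the circular convolution.
    """
    data = np.asarray(data, dtype=float)
    counts, edges = np.histogram(data, bins=m, range=(a, b))
    delta = (b - a) / m
    grid = edges[:-1] + 0.5 * delta
    weights = counts / data.size                     # probability mass per bin
    omega = 2.0 * np.pi * np.fft.rfftfreq(m, d=delta)
    kernel_ft = np.exp(-0.5 * (h * omega) ** 2)      # Fourier transform of the Gaussian kernel
    density = np.fft.irfft(np.fft.rfft(weights) * kernel_ft, m) / delta
    return grid, density
```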
2.4 Smoothers for Time Series
If the data consist of a sequence of $n$ observations recorded at equally spaced intervals, usually a time series, several robust smoothers are available. The fitted curve is intended to be robust to any outlying observations in the sequence, hence the techniques employed primarily make use of medians rather than means. These ideas come from the field of exploratory data analysis (EDA); see Tukey (1977) and Velleman and Hoaglin (1981). The smoothers are based on the use of running medians to summarise overlapping segments; these provide a simple but flexible curve.
In EDA terminology, the fitted curve and the residuals are called the smooth and the rough respectively, so that
$$\text{Data} = \text{Smooth} + \text{Rough} .$$
Using the notation of Tukey, one of the smoothers commonly used is 4253H,twice. This consists of a running median of $4$, then $2$, then $5$, then $3$. This is then followed by what is known as hanning. Hanning is a running weighted mean, the weights being $\tfrac{1}{4}$, $\tfrac{1}{2}$ and $\tfrac{1}{4}$. The result of this smoothing is then ‘reroughed’. This involves computing residuals from the computed fit, applying the same smoother to the residuals and adding the result to the smooth of the first pass.
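A minimal sketch of the 4253H,twice idea is given below (NumPy, illustrative names). It omits Tukey's end-value rules and the recentring conventions for even-length medians, which g10caf handles properly, so it only illustrates the structure of the compound smoother.

```python
import numpy as np

def running_median(y, span):
    # Simple running median; end values are left unchanged (no end rules).
    out = y.copy()
    half = span // 2
    for i in range(half, y.size - half):
        out[i] = np.median(y[i - half : i - half + span])
    return out

def hanning(y):
    # Running weighted mean with weights 1/4, 1/2, 1/4.
    out = y.copy()
    out[1:-1] = 0.25 * y[:-2] + 0.5 * y[1:-1] + 0.25 * y[2:]
    return out

def smooth_4253h(y):
    s = np.asarray(y, dtype=float)
    for span in (4, 2, 5, 3):          # running medians of 4, then 2, then 5, then 3
        s = running_median(s, span)
    return hanning(s)

def smooth_4253h_twice(y):
    y = np.asarray(y, dtype=float)
    smooth = smooth_4253h(y)
    rough = y - smooth                 # residuals from the first pass
    return smooth + smooth_4253h(rough)  # 'reroughing'
```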
3 Recommendations on Choice and Use of Available Routines
The following routines fit smoothing splines:
- g10abf computes a cubic smoothing spline for a given value of the smoothing parameter. The results returned include the values of the leverages and the coefficients of the cubic spline. Options allow only parts of the computation to be performed, for use when the routine is employed to estimate the value of the smoothing parameter or when it forms part of an iterative procedure such as that used in fitting generalized additive models; see Hastie and Tibshirani (1990).
- g10acf estimates the value of the smoothing parameter using one of three criteria and fits the cubic smoothing spline using that value.
g10abf and g10acf require the $x_i$ to be strictly increasing. If two or more observations have the same $x_i$ value then they should be replaced by a single observation with $y_i$ equal to the (weighted) mean of the $y$ values and weight, $w_i$, equal to the sum of the weights. This operation can be performed by g10zaf.
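The tie-handling step described above can be illustrated as follows; `collapse_ties` is a hypothetical NumPy helper showing the kind of preprocessing g10zaf performs, not the routine itself.

```python
import numpy as np

def collapse_ties(x, y, w=None):
    """Sort by x and merge observations with equal x into a single observation.

    The merged y is the weighted mean of the tied y values and the merged
    weight is the sum of their weights, giving data strictly increasing in x.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    order = np.argsort(x)
    x, y, w = x[order], y[order], w[order]
    x_unique, idx = np.unique(x, return_inverse=True)
    w_merged = np.bincount(idx, weights=w)
    y_merged = np.bincount(idx, weights=w * y) / w_merged
    return x_unique, y_merged, w_merged
```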
The following routine produces an estimate of the density function:
- g10bbf computes a density estimate using a Normal kernel.
The following routine produces a smoothed estimate for a time series:
- g10caf computes a smoothed series using running median smoothers.
The following service routine is also available:
- g10zaf orders and weights the input data to produce a dataset strictly monotonic in $x$.
4 Functionality Index
- Compute smoothed data sequence,
  - running median smoothers: g10caf
- Fit cubic smoothing spline,
  - smoothing parameter estimated: g10acf
  - smoothing parameter given: g10abf
- Kernel density estimation,
  - Gaussian kernel, thread safe: g10bbf
- Reorder data to give ordered distinct observations: g10zaf
5 Auxiliary Routines Associated with Library Routine Arguments
None.
6 Routines Withdrawn or Scheduled for Withdrawal
The following lists all those routines that have been withdrawn since Mark 19 of the Library or are scheduled for withdrawal at one of the next two marks.
| Withdrawn Routine | Mark of Withdrawal | Replacement Routine(s) |
|---|---|---|
| g10baf | 27 | g10bbf |
7 References
Hastie T J and Tibshirani R J (1990) Generalized Additive Models Chapman and Hall
Silverman B W (1990) Density Estimation Chapman and Hall
Tukey J W (1977) Exploratory Data Analysis Addison–Wesley
Velleman P F and Hoaglin D C (1981) Applications, Basics, and Computing of Exploratory Data Analysis Duxbury Press, Boston, MA