by Joseph Rickert

I recently wrote about some R resources that are available for generalized linear models (GLMs). Looking over the material, I was amazed by the amount of effort that continues to go into GLMs, both with respect to new theoretical developments and in response to practical problems such as the need to deal with very large data sets. (See the biglm, ff, ffbase and RevoScaleR packages, for example.) This led me to wonder about the history of the GLM and its implementations. An adequate exploration of this topic would occupy a serious science historian (which I am definitely not) for a considerable amount of time. However, I think even a brief look at what appears to be the main line of development of the GLM in R provides some insight into how good software influences statistical practice.

A convenient place to start is with the 1972 paper *Generalized Linear Models* by Nelder and Wedderburn, which seems to be the first paper to give the GLM a life of its own. The authors pulled things together by:

- grouping the Normal, Poisson, Binomial (probit) and gamma distributions together as members of the exponential family
- applying maximum likelihood estimation via the iteratively reweighted least squares algorithm to the family
- introducing the terminology “generalized linear models”
- suggesting that this unification would be a pedagogic improvement that would “simplify the teaching of the subject to both specialists and non-specialists”
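That unification is visible directly in the interface of R's glm() function, which descends from this line of work. As a minimal sketch of my own (the simulated data and model choices are illustrative, not from the paper), the same function call fits all four of these families, differing only in the family argument:

```r
# Simulate data and fit one GLM per family with the same glm() interface.
set.seed(42)
x <- runif(100)
y_norm <- 2 + 3 * x + rnorm(100)                       # Gaussian response
y_pois <- rpois(100, lambda = exp(0.5 + x))            # Poisson counts
y_bin  <- rbinom(100, size = 1, prob = plogis(-1 + 2 * x))  # 0/1 outcomes
y_gam  <- rgamma(100, shape = 2, rate = 2 / exp(0.5 + x))   # positive skewed

fits <- list(
  gaussian = glm(y_norm ~ x, family = gaussian()),
  poisson  = glm(y_pois ~ x, family = poisson()),
  binomial = glm(y_bin  ~ x, family = binomial(link = "probit")),
  gamma    = glm(y_gam  ~ x, family = Gamma(link = "log"))
)
sapply(fits, function(f) coef(f)[2])  # slope estimates, each on its link scale
```

Only the family (and, optionally, the link) changes; the model formula, the fitting machinery and the downstream methods (summary(), predict(), residuals()) are shared across all four.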

It is clear that the GLM was not “invented” in 1972. But Nelder and Wedderburn were able to package up statistical knowledge and a tradition of analysis going back quite far in a way that will forever shape how statisticians think about generalizations of linear models. For a brief but fairly detailed account of the major developments in categorical data analysis, logistic regression and loglinear models in the early 20th century leading up to the GLM, see Chapter 10 of Agresti (1996). (One very interesting fact highlighted by Agresti is that the iteratively reweighted least squares algorithm that Nelder and Wedderburn used to fit GLMs is the method that R. A. Fisher introduced in 1935 for fitting probit models by maximum likelihood.)
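To make the algorithm concrete, here is a sketch of my own (not code from any of the sources discussed here) of iteratively reweighted least squares for a Poisson GLM with log link, checked against glm(), which uses the same algorithm internally:

```r
# IRLS for a Poisson regression with log link, written out by hand.
set.seed(1)
x <- runif(200)
X <- cbind(1, x)                          # design matrix with intercept
y <- rpois(200, lambda = exp(1 + 0.5 * x))

beta <- rep(0, ncol(X))                   # start from zero coefficients
for (i in 1:25) {
  eta <- drop(X %*% beta)                 # linear predictor
  mu  <- exp(eta)                         # inverse of the log link
  W   <- mu                               # working weights (Poisson: var = mu)
  z   <- eta + (y - mu) / mu              # working response
  beta <- solve(crossprod(X, W * X), crossprod(X, W * z))  # weighted LS step
}
beta_irls <- drop(beta)
beta_glm  <- coef(glm(y ~ x, family = poisson()))
max(abs(beta_irls - beta_glm))            # discrepancy should be negligible
```

Each iteration is just a weighted least squares fit to a linearized “working response”, which is why software built for linear models could be extended to the whole exponential family so economically.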

The first generally available software to implement a wide range of GLMs seems to have been the Fortran-based GLIM system, which was developed by the Royal Statistical Society’s Working Party on Statistical Computing, released in 1974 and developed through 1993. My guess is that GLIM dominated the field for nearly 20 years, until it was eclipsed by the growing popularity of the 1991 version of S and by the introduction of PROC GENMOD in version 6.09 of SAS, released in the 1993 timeframe. (Note that the first edition of the manual for the MATLAB Statistics Toolbox also dates from 1993.)

In any event, in the 1980s the GLM became the “go to” statistical tool that it is today. In the chapter on *Generalized Linear Models* that they contributed to Chambers and Hastie’s landmark 1992 book, Hastie and Pregibon write that “GLMs have become popular over the past 10 years, partly due to the computer package GLIM …” It is a dangerous temptation to attribute more to a quotation like this than the authors intended. Nevertheless, I think it does offer some support for the idea that in a field such as statistics, theory shapes the tools, and then the shape of the tools exerts some influence on how the theory develops.

R’s glm() function was, of course, modeled on the S implementation. The stats package documentation states:

The original R implementation of glm was written by Simon Davies working for Ross Ihaka at the University of Auckland, but has since been extensively re-written by members of the R Core team. The design was inspired by the S function of the same name described in Hastie & Pregibon (1992).

I take this to mean that the R implementation of glm() was much more than just a direct port of the S code. glm() has come a long way. It is very likely that only the SAS PROC GENMOD implementation of the GLM has matched R’s glm() in popularity over the past decade. However, SAS’s closed environment has failed to match open-source R’s ability to foster growth and stimulate creativity. The performance, stability and rock-solid reliability of glm() have helped make GLMs a basic tool both for statisticians and for the new generation of data scientists.

How GLM implementations will develop outside of R is not at all clear. Python’s evolving glm implementation appears to be in the GLIM tradition. (The Python documentation references the paper by Green (1984), which in turn references GLIM.) Going back to first principles is always a good idea; however, Python’s GLM function apparently supports only one-parameter exponential families. The Python developers have a long way to go before they can match R’s rich functionality. The Julia glm function is clearly being modeled after R and shows much promise. However, recent threads on the julia-stats Google group indicate that the Julia developers are just now beginning to work on basic glm() functionality.

**References**

Agresti, Alan, *An Introduction to Categorical Data Analysis*: John Wiley and Sons (1996)

Chambers, John M. and Trevor J. Hastie (ed.), *Statistical Models In S*: Wadsworth & Brooks /Cole (1992)

Green, P. J., *Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives*: Journal of the Royal Statistical Society, Series B (1984)

McCullagh, P. and J. A. Nelder. *Generalized Linear Models*: Chapman & Hall (1990)

Nelder, J. A. and R. W. M. Wedderburn, *Generalized Linear Models*: J. R. Statist. Soc. A (1972), 135, Part 3, p. 370
