(Contributing blogger Joseph Rickert reports from the Stanford University Statistics Seminar series - ed.)
Stanford University is very gracious about letting the general public attend many university events. Yesterday, it caught my eye that Bradley Efron was going to speak on Bayesian inference and the parametric bootstrap at the weekly Statistics seminar. So, since the free shuttle to the Stanford quad practically stops at Revolution's front door, I got myself down there to find a standing-room-only crowd of Stanford faculty and students. Rob Tibshirani, a former student of Efron's, did his best to give Efron a hard time in a humorous introduction, but he didn't stand a chance against Efron's quick, dry wit.
Exploring the relationship between Frequentist and Bayesian thinking has been one of Efron's lifelong grand themes. In this talk, he used an early paper of Fisher's and an underappreciated paper by Newton and Raftery to show how importance sampling can be a computationally efficient alternative to MCMC for certain classes of problems, and to explore the link between Jeffreys priors and frequentist estimates. Efron's presentation was a masterpiece. His talk was tight, meticulously prepared, and delivered with an effortless grace that sustained the illusion that even the densest among us could follow the details. It was like having the company of a master painter on a leisurely Sunday visit to the museum: here expounding theory, there telling an anecdote about the painter or discussing some fine point of technique.
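To give a flavor of the importance-sampling idea for readers who think in code, here is a toy sketch of my own (a normal model with a N(0, 1) prior and a proposal centered at the MLE, which is roughly the role the parametric bootstrap distribution plays in Efron's setup; this is not his actual construction):

    # Toy importance-sampling sketch (my own example, not Efron's):
    # estimate a posterior mean by reweighting independent draws
    # from a proposal centered at the MLE.
    set.seed(1)
    x <- rnorm(25, mean = 2)   # simulated data standing in for a real sample
    n <- length(x)

    log_post <- function(theta)                # unnormalized log posterior:
      sum(dnorm(x, theta, 1, log = TRUE)) +    # normal likelihood
        dnorm(theta, 0, 1, log = TRUE)         # N(0, 1) prior

    theta_s  <- rnorm(10000, mean(x), 1 / sqrt(n))          # proposal draws
    log_prop <- dnorm(theta_s, mean(x), 1 / sqrt(n), log = TRUE)

    log_w <- sapply(theta_s, log_post) - log_prop
    w     <- exp(log_w - max(log_w))           # stabilize before normalizing
    sum(w * theta_s) / sum(w)                  # weighted posterior mean

No Markov chain is needed: the proposal draws are independent, and the weights do all the work.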
One goal of the talk was to demonstrate how one could go about estimating the frequentist properties of Bayesian estimates. Towards the conclusion, Efron remarked that if you have a real prior, even if it's only in your head, then your analysis stands on its own; but if you are going to use an uninformative prior, then you ought to check your results with frequentist methods.
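As a rough illustration of what such a check might look like (again a toy example of my own, not one from the talk: normal data with known variance 1 and a diffuse N(0, 100) prior), one can parametric-bootstrap the fitted model and watch how the Bayesian point estimate behaves over replications:

    # Parametric-bootstrap check of a Bayesian point estimate (sketch).
    set.seed(2)
    x <- rnorm(25, mean = 2)
    n <- length(x)

    bayes_est <- function(x) {     # posterior mean under a N(0, 100) prior,
      n <- length(x)               # conjugate normal model, known variance 1
      sum(x) / (n + 1 / 100)
    }

    mle  <- mean(x)
    reps <- replicate(2000, bayes_est(rnorm(n, mean = mle)))
    c(boot_bias = mean(reps) - mle, boot_se = sd(reps))

A near-zero bootstrap bias and a sensible standard error are the kind of frequentist reassurance Efron had in mind.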
For the R enthusiasts in the crowd, a small surprise came on slide 22. When Efron got to the first line of this slide, he paused to remark on the mixed notation and pointed out that two of the inventors of the new notation were in attendance (Chambers and Hastie). I have been saying for some time now that the R language facilitates statistical thought. Now I have some evidence.
Quite obviously, the slides of Brad Efron's talk are most interesting, both for the links with frequentist statistics and for this tantalising slide 22. On the Newton & Raftery paper, the "under-appreciation" (at least in terms of Section 7 of the paper) is easily explained through Radford Neal's blog entry ("the worst Monte Carlo method ever"): the harmonic mean approximation to the marginal likelihood has a strong potential for infinite variance.
Posted by: Xi'an | October 07, 2011 at 08:52
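To see Xi'an's point in miniature: Newton & Raftery's harmonic mean estimator averages 1/likelihood over posterior draws, and those weights can have infinite variance. Here is a toy conjugate-normal sketch of my own, chosen so the exact marginal likelihood is available in closed form:

    # Harmonic-mean estimate of the log marginal likelihood vs. the
    # exact answer in a conjugate normal model (toy demonstration).
    set.seed(3)
    x <- rnorm(20)
    n <- length(x)
    loglik <- function(theta) sum(dnorm(x, theta, 1, log = TRUE))

    # Exact posterior under a N(0, 1) prior is N(sum(x)/(n+1), 1/(n+1))
    draws <- rnorm(1e5, sum(x) / (n + 1), sqrt(1 / (n + 1)))
    ll    <- sapply(draws, loglik)

    m   <- max(-ll)
    hme <- -(m + log(mean(exp(-ll - m))))   # log harmonic-mean estimate

    exact <- -n / 2 * log(2 * pi) - 0.5 * log(n + 1) -
      0.5 * (sum(x^2) - sum(x)^2 / (n + 1))
    c(harmonic_mean = hme, exact = exact)

Even with 100,000 posterior draws, the harmonic-mean estimate can sit noticeably above the exact value, and rerunning with a new seed moves it around: the symptom of the infinite-variance weights Neal describes.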