
March 20, 2009



But it is not a problem with the model; it is a problem with those who used the model's results. The model provides a probability that these deals pay out, and the manager should then calculate the expected profit of such a deal: 0.9985 × premium − 0.0015 × payout. The expected profit is negative if the payout is large enough. Those who made these deals should know what the premium and the payout are, and the professor should know how to use a number like 99.85%; this is taught in introductory statistical decision theory.
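The decision rule above can be sketched numerically. Only the 0.15% payout probability comes from the discussion; the premium and payout figures below are purely hypothetical, chosen to show how a large payout flips the sign of the expected profit.

```python
# Expected-profit check for an insurance-like deal, following the
# decision rule in the comment above. The premium and payout figures
# are hypothetical; only the 0.15% payout probability is from the post.
p_payout = 0.0015            # model's probability the deal pays out
premium = 100_000            # collected up front (hypothetical)
payout = 100_000_000         # paid if the event occurs (hypothetical)

# Expected profit: 0.9985 * premium - 0.0015 * payout
expected_profit = (1 - p_payout) * premium - p_payout * payout

# Break-even payout: the deal is profitable only while
# payout < premium * (1 - p) / p
break_even_payout = premium * (1 - p_payout) / p_payout

print(f"expected profit:   {expected_profit:,.0f}")
print(f"break-even payout: {break_even_payout:,.0f}")
```

With these illustrative numbers the expected profit is about −50,150: a 99.85% chance of keeping the premium does not compensate for a 0.15% chance of a payout a thousand times larger.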

I agree with your statement that the problem is how the model is used, especially once it leaves the quant's desk. My point is that any model is only as good as its assumptions: while the quant implicitly understands that Pr(Event) = 0.0015 *given the model assumptions are upheld*, that caveat isn't repeated (or understood) as the model's predictions are reported. And therein lies the problem: in this case, the entire underlying data regime changed, rendering that probability assessment meaningless.
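The caveat "given the model assumptions are upheld" can be made concrete with a toy example (not from the post, and not the actual model under discussion): the probability of the same 3-sigma loss event grows roughly twenty-fold when an assumed Gaussian distribution is swapped for a heavier-tailed Student-t with 3 degrees of freedom.

```python
import math
import random
from statistics import NormalDist

# Toy illustration (hypothetical, not the model from the post): the same
# "3-sigma" event under two different distributional assumptions.

# Under the Gaussian assumption, P(Z < -3) is about 0.00135.
p_normal = NormalDist().cdf(-3)

# Under a heavier-tailed Student-t with 3 degrees of freedom,
# estimated here by simulation: t = Z / sqrt(chi2_3 / 3).
random.seed(1)
n = 200_000
hits = 0
for _ in range(n):
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(3))
    if z / math.sqrt(chi2 / 3) < -3:
        hits += 1
p_t3 = hits / n

print(f"P(X < -3), normal: {p_normal:.5f}")   # ~0.00135
print(f"P(X < -3), t(3):   {p_t3:.5f}")       # ~0.029
```

The point is not the particular distributions but the fragility: the 0.0015-style probability is a statement about the model's world, and it can be off by an order of magnitude when the tail assumption no longer holds.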

But "all models are wrong" (Box, 1976). Does that mean all predictions are useless? No: if we present our uncertainty properly, a wrong model can still be useful in helping us make informed decisions. In this case, I guess that Gorton summarized his uncertainty in the probability of a full-blown depression. Hindsight is 20/20: how many in the mainstream seriously questioned, just one year ago, the assumption that a full-blown depression was unlikely?
