Collective wisdom about the location of Ground Zero for the credit crisis seems to be coalescing around AIG's Financial Products Unit (AIGFP), which basically invented the credit default swaps at the center of the whole mess. TPMmuckraker provides an excellent history of AIGFP: its inception, rise, and eventual downfall (taking AIG and the economy with it).

I want to focus on one small nugget from that history, under the heading "The Seed Of Ruin Is Planted". It documents the very first credit default swap deal AIGFP made, in 1998:

JP Morgan approached AIG, proposing that, for a fee, AIG insure JP Morgan's complex corporate debt, in case of default. According to computer models devised by Gary Gorton, a Yale Business Professor and consultant to the unit, *there was a 99.85 percent chance that AIGFP would never have to pay out on these deals*. Essentially, this would happen only if the economy went into a full-blown depression.

(Emphasis mine.) I'd guess that 0.15% under-estimated the risk that AIG would have to pay out on these deals, possibly by a significant margin (though that's easy to claim with the hindsight we now have). That 99.85% figure is almost certainly an estimate of probability *given that the assumptions of the model are upheld*.

Now, I don't know anything about the underlying model per se, but I'm willing to bet that the data used to estimate it didn't go back the 70-plus years needed to cover the last full-blown (aka Great) depression. Credit default swaps are complex instruments, and much of the data needed to build and estimate the model likely doesn't exist with more than a 10-20 year history at best. So it may be fair to say that there's a 0.15% chance of failure if a series of fairly unusual things happen, where "unusual" means unusual in the context of the last 20 years. But if a full-blown depression does occur, the model is no longer valid, because it has never seen data relevant to a depression. In that case, when it comes to estimates of risk and probability, all bets are off.
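To make the point concrete, here's a toy calculation (all rates are invented for illustration; this is not the AIGFP model). A default probability estimated only from benign years can't reflect depression-regime risk, and even a modest weight on a depression regime changes the answer substantially:

```python
# Toy illustration (all rates hypothetical): a model fit only to benign
# years cannot see depression-regime risk.

benign_rate = 0.0015      # annual payout probability in "normal" times
depression_rate = 0.05    # assumed payout probability in a depression year

# A model estimated on 20 benign years sees only the benign rate.
estimate_benign_only = benign_rate

# A longer history with, say, 10 depression years out of 90 mixes regimes.
p_depression_year = 10 / 90
estimate_long_history = (
    (1 - p_depression_year) * benign_rate
    + p_depression_year * depression_rate
)

print(estimate_benign_only)   # 0.0015
print(estimate_long_history)  # ~0.0069, more than 4x higher
```

The exact numbers are meaningless; the point is that the estimate is conditional on the regimes represented in the training data.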

But this is not a problem with the model so much as a problem with those who used the model's results. The model provides a probability of paying out on these deals; the manager should then calculate the expected profit of such a deal: 0.9985 × premium − 0.0015 × payout. That expected profit is negative if the payout is large enough. Those who made these deals knew the premium and the payout, and the professor surely knows how to use a number like 99.85%; they teach this in introductory statistical decision theory.
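SQ's decision-theory arithmetic can be sketched directly (the premium and payout figures below are made-up illustrations, not AIG's actual numbers):

```python
def expected_profit(premium, payout, p_default):
    """SQ's expected-profit formula: (1 - p) * premium - p * payout."""
    return (1 - p_default) * premium - p_default * payout

def breakeven_payout(premium, p_default):
    """Payout size at which the expected profit is exactly zero."""
    return premium * (1 - p_default) / p_default

p = 0.0015      # Gorton's modeled probability of ever paying out
premium = 1.0   # hypothetical premium, in arbitrary units

print(breakeven_payout(premium, p))         # roughly 665.7x the premium
print(expected_profit(premium, 500.0, p))   # positive: deal looks profitable
print(expected_profit(premium, 1000.0, p))  # negative: payout too large
```

With p = 0.0015, any payout larger than about 666 times the premium makes the deal a money-loser in expectation, even taking the model's probability at face value.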

Posted by: SQ | March 20, 2009 at 14:18

I agree with your statement that the problem is how the model is used, especially once it leaves the quant's desk. My point is that any model is only as good as its assumptions, and while the quant implicitly understands that Pr(Event) = 0.0015 *given the model assumptions are upheld*, that caveat isn't repeated (or understood) as the model predictions are reported. And therein lies the problem: in this case, the entire underlying data regime changed, rendering that probability assessment meaningless.

Posted by: David Smith | March 20, 2009 at 14:28

But "all models are wrong" (Box, 1976). Does that mean all predictions are useless? No: if we present our uncertainty properly, even a wrong model can be useful in helping us make informed decisions. In this case, I'd guess that Gorton summarized his uncertainty in the probability of a full-blown depression. Hindsight is 20/20: how many in the mainstream seriously questioned the assumption that a full-blown depression was unlikely, even just one year ago?

Posted by: SQ | March 26, 2009 at 09:59