It's a provocative claim, but this article from Sunday's New York Times Magazine examines the practice of using VaR (Value at Risk) as a single, univariate measure of the risk of financial instruments, and argues that it may have played a significant role in the recent economic turmoil.
For those not familiar with the terminology, VaR is simply a quantile of a profit-and-loss distribution. If we represent the profit/loss of a financial instrument over the next week as the random variable X, then the 99% weekly VaR is simply the lower 1% quantile of the distribution of X (usually reported as a positive dollar loss). Because X is represented as a dollar figure, we might conversationally refer to the 99% weekly VaR like this: "we expect with 99% probability that the maximum loss in the next week will be less than $47.5 million." Of course, as the article explains, this colloquial statement hides a number of issues.
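To make the definition concrete, here's a minimal sketch in Python (with entirely made-up parameters) that estimates a 99% weekly VaR as the empirical 1% quantile of simulated profit/loss figures; the normal model and dollar scale are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly profit/loss (in $ millions) for some instrument,
# simulated here from a normal model purely for illustration.
pnl = rng.normal(loc=5.0, scale=20.0, size=100_000)

# The 99% weekly VaR: the lower 1% quantile of the P&L distribution,
# negated so it reads as a positive dollar loss.
var_99 = -np.quantile(pnl, 0.01)

print(f"99% weekly VaR: ${var_99:.1f} million")
# Read as: "with 99% probability, next week's loss won't exceed this figure"
# -- subject to every caveat discussed above.
```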
The most obvious one is that we never actually know the distribution of X: it must be modeled, and (pace Box) all models are wrong. Whether the VaR models were wrong but useful depends on the statistician (or, in financial parlance, quant) building the model and the data used to support it. Much of the article discusses the ways in which the models themselves may have been lacking, most notably by relying on recent historical data that either covered only recent decades without significant market downturns or, in the case of very new instruments where little data were available, only periods during the most recent economic bubble. (This related article by Noah Millman focuses more on the ratings side, and gives a darkly hilarious example of how a ratings agency used 3 years of data to turn a derivative based on BBB-rated instruments into an investment-grade AAA product.)
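As a toy illustration of that data problem, here's a sketch (again with invented numbers) of how a historical-simulation VaR fit only to a calm period can look reassuringly small compared with one that also sees a stress period:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily P&L (in $ millions): a calm "bubble" regime followed by
# a turbulent regime with larger losses. Purely illustrative numbers.
calm = rng.normal(0.5, 5.0, size=750)         # ~3 years of benign data
turbulent = rng.normal(-1.0, 25.0, size=250)  # a stress period

full_history = np.concatenate([calm, turbulent])

# Historical-simulation VaR: the empirical 1% quantile, negated.
var_calm_only = -np.quantile(calm, 0.01)
var_full = -np.quantile(full_history, 0.01)

print(f"99% VaR fit on calm years only:        ${var_calm_only:.1f}M")
print(f"99% VaR including the stress period:   ${var_full:.1f}M")
# The model isn't "wrong" given its data -- the data simply never showed it a downturn.
```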
Quants interviewed in the NYT article responded quite reasonably: VaR is a tool, and as long as you understand its limitations, it's a useful one. It's understood that a 99% VaR says nothing about the magnitude of the risk in the 1% tail, but it's nonetheless a useful tool for managing risk 99% of the time, when the market is behaving "normally". But presenting VaR as a deceptively easy-to-understand single dollar figure has a profound psychological effect on non-quants, who are wont to interpret it as a bound on potential losses. To me, one of the most insightful lines in the article was this:
There was everyone, really, who, over time, forgot that the VaR number was only meant to describe what happened 99 percent of the time. That $50 million wasn’t just the most you could lose 99 percent of the time. It was the least you could lose 1 percent of the time.
But even then, doesn't this understate the risk? $50M is the minimum loss at the 1% quantile only when the model is correct. As we now know, the models weren't correct, and losses at many firms were much larger than anticipated. The article gives a surprising example of how, in one case, extreme losses led a firm to place even more reliance on its model:
Indeed, so sure were the firm’s partners that the market would revert to “normal” — which is what their model insisted would happen — that they continued to take on exposures that would destroy the firm as the crisis worsened.
That firm, famously, was LTCM before its collapse in 1998, but one wonders whether similar examples haven't occurred during the most recent crisis.
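To put a rough number on the point above -- that the quoted figure is a floor on the 1% losses only if the model is right -- here's a sketch comparing a normal model's 99% VaR with losses drawn from a heavier-tailed Student-t "reality"; all parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# A normal model calibrated to hypothetical weekly P&L (in $ millions).
mu, sigma = 5.0, 20.0
z_01 = -2.326                        # 1% quantile of the standard normal
var_99_model = -(mu + z_01 * sigma)  # model-implied 99% VaR (~$41.5M here)

# "Reality": same location and scale parameter, but heavy-tailed (Student-t, 3 d.o.f.).
reality = mu + sigma * rng.standard_t(df=3, size=200_000)
breaches = reality < -var_99_model

print(f"Model 99% VaR:               ${var_99_model:.1f}M")
print(f"P(loss exceeds model VaR):   {breaches.mean():.2%}")
print(f"Average loss when breached:  ${-reality[breaches].mean():.1f}M")
# Under the heavy-tailed 'reality', losses breach the model's VaR far more often
# than 1% of the time, and the breaches are much larger than the model implies.
```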
So, is quantitative analysis a true cause of this crisis? The article appears to lean towards yes (drawing heavily on the comments of Nassim Nicholas Taleb), but it also gives several examples -- most notably from Goldman Sachs -- of managers who saw changes in VaR as a signal that something was changing in the underlying structure of the markets, rendering the models themselves invalid. In other words, the failures here are more human than quantitative: not just in failing to employ VaR sensibly, but also in the failures of the ratings and regulatory agencies (see this recent NYT op-ed for a great discussion of that topic), and even in wilful deception and fraud (from "stuffing risk into the tails" to examples like Madoff). I think this quote from Gregg Berman at RiskMetrics sums it up well:
“A computer does not do risk modeling. People do it. And people got overzealous and they stopped being careful. They took on too much leverage. And whether they had models that missed that, or they weren’t paying enough attention, I don’t know. But I do think that this was much more a failure of management than of risk management. I think blaming models for this would be very unfortunate because you are placing blame on a mathematical equation. You can’t blame math.”
I can't understand why one wouldn't compute an annual or biannual VaR -- that would reveal all the problems of a chance event, even at 99%, that might not matter for fast technical and speculative transactions.
But agreed, a bad workman blames his tools.
Posted by: Aleks | January 06, 2009 at 14:35
I guess it depends on exactly which model is used as the basis of the VaR calculation, but wouldn't a longer-term VaR essentially just scale the profit/loss distribution? If you're using VaR to truly represent a dollar value, then an annual VaR might make sense (even if presenting VaR as a dollar value to non-quants hides all of the unknown long-tail risks), but if you're tracking VaR over time and looking for extremes (akin to a six-sigma process-control chart), then the time-scale shouldn't matter as much, right?
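For instance, under the textbook assumption of i.i.d. zero-mean normal P&L, the scaling is just a multiplication by the square root of time -- a sketch with made-up numbers:

```python
import numpy as np

# Square-root-of-time scaling, assuming i.i.d. zero-mean normal weekly P&L
# (illustrative numbers only).
z_01 = -2.326          # 1% quantile of the standard normal
sigma_weekly = 20.0    # weekly P&L standard deviation, $ millions

var_weekly = -z_01 * sigma_weekly                 # ~$46.5M
var_annual = -z_01 * sigma_weekly * np.sqrt(52)   # weekly VaR times sqrt(52)

print(f"99% weekly VaR: ${var_weekly:.1f}M")
print(f"99% annual VaR: ${var_annual:.1f}M")
# Under these assumptions the annual figure is a deterministic multiple of the
# weekly one: it rescales the number but adds no new information about the tails.
```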
Posted by: David Smith | January 06, 2009 at 14:51
A longer-term VaR would definitely scale the variance more than linearly in most models I can imagine to be valid. The longer the period, the greater the chance of a meteorite hitting the NYSE, and the greater the chance of any other type of disruption.
Even when you use VaR, you can still look at the expectation of the worst 1% of outcomes (the expected shortfall) and make sure it wouldn't cause you to default.
The real problem is in the reductionism of badly overfit models, not in the tails. Heavy tails merely cause fear; they don't really tell you what to do.
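For what it's worth, a minimal sketch of that "expectation of the worst 1%" -- the expected shortfall -- next to the VaR it complements, with invented heavy-tailed numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weekly P&L in $ millions, drawn heavy-tailed for illustration.
pnl = 5.0 + 20.0 * rng.standard_t(df=3, size=200_000)

var_99 = -np.quantile(pnl, 0.01)   # 99% VaR: the 1% quantile, negated
tail = pnl[pnl <= -var_99]         # the worst 1% of outcomes
es_99 = -tail.mean()               # expected shortfall: average loss in that tail

print(f"99% VaR:                ${var_99:.1f}M")
print(f"99% expected shortfall: ${es_99:.1f}M")
# VaR marks where the worst 1% begins; expected shortfall says how bad it is
# on average once you're in it -- the number to check against default.
```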
Posted by: Aleks | January 06, 2009 at 16:53