Part 2 of a series
by Daniel Hanson, with contributions by Steve Su (author of the GLDEX package)
In our previous article, we introduced the four-parameter Generalized Lambda Distribution (GLD) and looked at fitting a 20-year set of returns from the Wilshire 5000 Index, comparing the results of two methods, namely the Method of Moments and the Method of Maximum Likelihood.
Errata: One important omission in Part 1 was failing to include
require(GLDEX)
before the examples shown. Many thanks to the reader who pointed this out in the comments section last time.
Let’s also recall the code we used for obtaining returns from the Wilshire 5000 index, and the first four moments of the data (details are in Part 1):
require(quantmod) # quantmod package must be installed
getSymbols("VTSMX", from = "1994-09-01")
VTSMX.Close <- VTSMX[,4] # Closing prices
VTSMX.vector <- as.vector(VTSMX.Close)
# Calculate log returns
Wilsh5000 <- diff(log(VTSMX.vector), lag = 1)
Wilsh5000 <- 100 * Wilsh5000[-1] # Remove the NA in the first position,
# and put in percent format
# Moments of Wilshire 5000 market returns:
fun.moments.r(Wilsh5000, normalise = "Y") # normalise="Y" subtracts 3 from the
# kurtosis, reporting excess kurtosis relative to the normal distribution.
# Results:
# mean variance skewness kurtosis
# 0.02824676 1.50214916 0.30413445 7.82107430
Finally, in Part 1, we looked at two methods for fitting a GLD to this data, namely the Method of Moments (MM), and the Method of Maximum Likelihood (ML). We found that MM gave us a near perfect match in mean, variance, skewness, and kurtosis, but goodness of fit measures showed that we could not conclude that the market data was drawn from the fitted distribution. On the other hand, ML gave us a much better fit, but it came at the price of skewness being way off compared to that of the data, and kurtosis not being determined by the fitting algorithm (NA).
Method of L-Moments (LM)
Steve Su, in his contributions to this article series, suggested the option of a “third way”, namely the Method of L-Moments. As mentioned in the paper L-moments and TL-moments of the generalized lambda distribution (William H. Asquith, 2006),
“The method of L-moments is an alternative technique, which is suitable and popular for heavy-tailed distributions. The method of L-moments is particularly useful for distributions, such as the generalized lambda distribution (GLD), that are only expressible in inverse or quantile function form.”
Additional details on the method and algorithm for computing it can be found in this paper, noted above.
As we will see in the example that follows, the result is essentially a compromise between our first two results, but the goodness of fit is still far preferable to that of the Method of Moments.
We follow the same approach as above, but using the GLDEX function fun.RMFMKL.lm(.) to calculate the fitted distribution:
# Fit the LM distribution:
require(GLDEX) # Remembered it this time!
wshLambdaLM <- fun.RMFMKL.lm(Wilsh5000)
# Compute the associated moments
fun.theo.mv.gld(wshLambdaLM[1], wshLambdaLM[2], wshLambdaLM[3], wshLambdaLM[4],
param = "fmkl", normalise="Y")
# The results are:
# mean variance skewness kurtosis
# 0.02824678 1.56947022 1.32265715 291.58852044
As was the case with the maximum likelihood fit, the mean and variance are reasonably close, but the skewness and kurtosis do not match the empirical data. However, the skew is not as far off as in the ML case, and we are at least able to calculate a kurtosis value.
Looking at our goodness of fit tests based on the KS statistic:
fun.diag.ks.g(result = wshLambdaLM, data = Wilsh5000, no.test = 1000,
param = "fmkl")
# Result: 740/1000
ks.gof(Wilsh5000, "pgl", wshLambdaLM, param = "fmkl")
# D = 0.0201, p-value = 0.03383
In the first case, our result of 740/1000 suggests a much better fit than the Method of Moments (53/1000) in Part 1, while falling slightly short of the ratio (825/1000) we obtained with the Method of Maximum Likelihood. In the second test, a p-value of 0.03383 is not overwhelmingly convincing, but it does mean we fail to reject the null hypothesis, that our data is drawn from the fitted distribution, at the α = 0.025 or α = 0.01 significance levels.
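To make the decision rule explicit, here is a quick sketch (our own illustration, not part of the original analysis) comparing the reported p-value against each significance level; we fail to reject the null hypothesis whenever p >= α:

```r
# Quick sketch: compare the KS p-value against common significance levels.
p.value <- 0.03383
alphas  <- c(0.05, 0.025, 0.01)
reject  <- p.value < alphas        # reject H0 only when p < alpha
names(reject) <- paste0("alpha=", alphas)
print(reject)  # TRUE at 0.05; FALSE at 0.025 and 0.01
```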
Perhaps more interesting is just looking at the plot:
bins <- nclass.FD(Wilsh5000) # We get 158 (Freedman-Diaconis Rule)
fun.plot.fit(fit.obj = wshLambdaLM, data = Wilsh5000, nclass = bins, param = "fmkl",
xlab = "Returns", main = "Method of L-Moments")
Again, compared to the result for the Method of Moments in Part 1, the plot suggests that we have a better fit.
The QQ plot, however, is not much different from what we got in the Maximum Likelihood case; in particular, losses in the left tail are not underestimated by the fitted distribution as they are in the MM case:
qqplot.gld(fit = wshLambdaLM, data = Wilsh5000, param = "fmkl", type = "str.qqplot",
main = "Method of L-Moments")
Which Option is “Best”?
Steve Su points out that there is no one “best” solution, as there are trade-offs and competing constraints involved in the algorithms, and this is one reason why, in addition to the methods described above, so many different functions are available in the GLDEX package. On one point, however, there is general agreement in the literature: the Method of Moments, even with the appeal of matching the moments of the empirical data, is inferior to methods that result in a better overall fit. This is also discussed in the paper by Asquith, namely, that the method of moments “generally works well for light-tailed distributions. For heavy-tailed distributions, however, use of the method of moments can be questioned.”
Comparison with the Normal Distribution
For a straw-man comparison, we can fit the Wilshire 5000 returns to a normal distribution in R, and run the KS test as follows:
require(MASS) # for fitdistr(.)
f <- fitdistr(Wilsh5000, densfun = "normal")
ks.test(Wilsh5000, "pnorm", f$estimate[1], f$estimate[2], alternative = "two.sided")
The results are as follows:
# One-sample Kolmogorov-Smirnov test
# data: Wilsh5000
# D = 0.0841, p-value < 2.2e-16
# alternative hypothesis: two-sided
With a pvalue that small, we can firmly reject the returns data as being drawn from a fitted normal distribution.
We can also get a look at the plot of the implied normal distribution overlaid upon the fit we obtained with the method of L-moments, as follows:
x <- seq(min(Wilsh5000), max(Wilsh5000), length.out = bins)
# Chop the domain into bins = 158 intervals to get sample points
# from the approximated normal distribution (Freedman-Diaconis)
fun.plot.fit(fit.obj = wshLambdaLM, data = Wilsh5000, nclass = bins,
param = "fmkl", xlab = "Returns")
curve(dnorm(x, mean=f$estimate[1], sd=f$estimate[2]), add=TRUE,
col = "red", lwd = 2) # Normal curve in red
Although it may be a little difficult to see, note that between -3 and -4 on the horizontal axis, the tail of the normal fit (in red) falls below that of the GLD (in blue), and it is along this left tail where extreme events can occur in the markets. The normal distribution implies a lower probability of these “black swan” events than the more representative GLD.
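To put a rough number on this tail difference, one can compare the cumulative probability of a large daily loss under the two fits. This is our own sketch, not from the original article: it assumes the fitted objects wshLambdaLM and f from above, relies on GLDEX's pgl(.) accepting the fitted lambda vector (as in our earlier ks.gof call), and the -3 percent threshold is chosen purely for illustration.

```r
# Sketch: P(daily return < -3%) under the fitted GLD vs the fitted normal.
# Assumes wshLambdaLM (GLD fit) and f (fitdistr normal fit) from above.
loss   <- -3
p.gld  <- pgl(loss, wshLambdaLM, param = "fmkl")                 # GLD left tail
p.norm <- pnorm(loss, mean = f$estimate[1], sd = f$estimate[2])  # normal left tail
c(GLD = p.gld, Normal = p.norm)  # expect the GLD to assign the larger probability
```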
This is further confirmed by looking at the QQ plot vs a normal distribution fit. Note how the theoretical fit (on the horizontal axis in this case, using the base R function qqnorm(.); i.e., the axes are switched compared to those in our previous QQ plots) vastly underestimates losses in the left tail.
qqnorm(Wilsh5000, main = "Normal QQ Plot")
qqline(Wilsh5000)
In summary, from these plots, we can see that the GLD fit, particularly using ML or LM, is a superior alternative to what we get with the normal distribution fit when estimating potential index losses.
Conclusion
We have seen, using R and the GLDEX package, how a four-parameter distribution such as the Generalized Lambda Distribution can be used to fit a more realistic distribution to market data as compared to the normal distribution, particularly considering the fat tails typically present in returns data that cannot be captured by a normal distribution. While the Method of Moments as a fitting algorithm is highly appealing due to its preserving the moments of the empirical distribution, we sacrifice goodness of fit that can be obtained using other methods such as Maximum Likelihood and L-Moments.
The GLD has been demonstrated in financial texts and research literature as a suitable distributional fit for determining market risk measures such as Value at Risk (VaR), Expected Shortfall (ES), and other metrics. We will look at examples in an upcoming article.
Again, very special thanks are due to Dr Steve Su for his contributions and guidance in presenting this topic.
by Daniel Hanson, with contributions by Steve Su (author of the GLDEX package). Part 1 of a series.
As most readers are well aware, market return data tends to have heavier tails than a normal distribution can capture, and the skewness typically present in returns will not be captured either. For this reason, a four-parameter distribution such as the Generalized Lambda Distribution (GLD) can give us a more realistic representation of the behavior of market returns, including a more accurate measure of expected loss in risk management applications as compared to the normal distribution.
This is not to say that the normal distribution should be thrown in the dustbin, as the underlying stochastic calculus, based on Brownian Motion, remains a very convenient tool in modeling derivatives pricing and risk exposures (see earlier blog article here), but like all modeling methods, it has its strengths and weaknesses.
As noted in the book Financial Risk Modelling and Portfolio Optimization with R (Pfaff, Ch 6: Suitable distributions for returns) (publisher information provided here), the GLD is one of the recommended distributions to consider in order “to model not just the tail behavior of the losses, but the entire return distribution. This need arises when, for example, returns have to be sampled for Monte Carlo type applications.” The author provides descriptions and examples of several R packages freely available on the CRAN website, namely Davies, fBasics, gld, and lmomco. Another package, also freely available on CRAN, is the GLDEX package, which is the package we will use in the current article. It contains a rich offering of functions and is well documented. In addition, the author of the GLDEX package, Dr Steve Su, has kindly provided assistance in the writing of this article. He has also published a very useful and related article in the Journal of Statistical Software (JSS) (2007), to which we will refer in the discussion below.
The four parameters of the GLD are, not surprisingly, λ1, λ2, λ3, and λ4. Without going into theoretical details, suffice it to say that λ1 and λ2 are measures of location and scale respectively, while the skewness and kurtosis of the distribution are determined by λ3 and λ4.
Furthermore, there are two forms of the GLD that are implemented in GLDEX, namely those of Ramberg and Schmeiser (1974), and Freimer, Mudholkar, Kollia, and Lin (1988). These are commonly abbreviated as RS and FMKL. As the FMKL form is the more modern of the two, we will focus on it in the discussion that follows. An additional reference frequently cited in the literature related to the GLD in finance is the paper by Chalabi, Scott, and Wurtz, freely available here on the rmetrics website.
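For concreteness, the FMKL form is defined through its quantile function, Q(u) = λ1 + [(u^λ3 − 1)/λ3 − ((1−u)^λ4 − 1)/λ4]/λ2. The helper below is our own minimal sketch of this standard formula (the name qgl.fmkl is ours; GLDEX provides its own qgl function):

```r
# FMKL quantile function:
#   Q(u) = lambda1 + ( (u^l3 - 1)/l3 - ((1-u)^l4 - 1)/l4 ) / lambda2
# lambda1: location, lambda2: scale, lambda3/lambda4: shape parameters.
qgl.fmkl <- function(u, lambda) {
  l1 <- lambda[1]; l2 <- lambda[2]; l3 <- lambda[3]; l4 <- lambda[4]
  l1 + ((u^l3 - 1) / l3 - ((1 - u)^l4 - 1) / l4) / l2
}

# Example: for equal shape parameters the distribution is symmetric,
# so the median is exactly lambda1:
qgl.fmkl(0.5, c(0, 1, 0.5, 0.5))  # 0, by symmetry
```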
As Steve Su points out in his 2007 JSS article on the GLDEX package (see link above), there are three basic steps that are useful in determining the quality of the GLD fit:
1. Compare the moments (mean, variance, skewness, kurtosis) of the fitted distribution with those of the empirical data.
2. Apply goodness-of-fit tests, such as the Kolmogorov-Smirnov resample test.
3. Inspect the fit graphically, by overlaying the fitted density on a histogram of the data and by examining quantile (QQ) plots.
The first two, as we shall see, can be competing objectives in determining the fit. The GLDEX package provides functionality for each.
Remark: The list of options has been presented here in the opposite order of that in the JSS article, in order to assist in the development of the discussion, as we shall see.
Market Returns Data
Let’s first obtain some market data to use. The Wilshire 5000 index is commonly used as a measure of the total US equity market, comprising large-, medium-, and small-cap stocks, so we call once again upon our old friend the quantmod package to access the past 20 years of daily closing prices of the Vanguard Total Stock Market Index Fund (VTSMX).
require(quantmod) # quantmod package must be installed
getSymbols("VTSMX", from = "1994-09-01")
VTSMX.Close <- VTSMX[,4] # Closing prices
VTSMX.vector <- as.vector(VTSMX.Close)
# Calculate log returns
Wilsh5000 <- diff(log(VTSMX.vector), lag = 1)
Wilsh5000 <- 100 * Wilsh5000[-1] # Remove the NA in the first position,
# and put in percent format
Method of Moments
Appealing to step (1) above, the following function uses the FMKL form to fit the data to a GLD with the method of moments:
wshLambdaMM <- fun.RMFMKL.mm(Wilsh5000)
This returns the estimated values of λ1, λ2, λ3, and λ4 in the vector wshLambdaMM:
[1] 0.04882924 1.98442097 0.16423899 0.13470102
Remark: Warning messages such as the following may occur when running this function:
Warning messages:
1: In beta(a, b) : NaNs produced
2: In beta(a, b) : NaNs produced
…
These may be ignored.
We can then compare the four moments of the fitted distribution with those of the market data using the following functions respectively:
# Moments of fitted distribution:
fun.theo.mv.gld(wshLambdaMM[1], wshLambdaMM[2], wshLambdaMM[3], wshLambdaMM[4],
param = "fmkl", normalise="Y")
# Results:
# mean variance skewness kurtosis
# 0.02824672 1.50214919 0.30413445 7.8210743
# Moments of Wilshire 5000 market returns:
fun.moments.r(Wilsh5000, normalise = "Y") # normalise="Y" subtracts 3 from the
# kurtosis, reporting excess kurtosis relative to the normal distribution.
# Results:
# mean variance skewness kurtosis
# 0.02824676 1.50214916 0.30413445 7.82107430
We’re basically spot-on here, and things are looking pretty good; however, we haven’t looked at a goodness-of-fit test yet, and unfortunately, this will tell a different story. We will first look at the Kolmogorov-Smirnov (KS) resample test, as shown in the 2007 JSS article. The test is based on the Kolmogorov-Smirnov distance (D) between the data in the sample and the fitted distribution. The null hypothesis is, simply speaking, that the sample data is drawn from the fitted distribution.
The function here, from the GLDEX package, samples a proportion (default = 90%) of the data and the fitted distribution, calculates the KS test p-value 1000 times (the no.test argument), and returns the number of times that the p-value is not significant. The higher the number, the more confident we can be that the fitted distribution is reasonable.
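To make the resampling idea concrete, here is a rough hand-rolled version of the diagnostic. This is our own sketch, not the GLDEX implementation: it assumes GLDEX's rgl(.) can draw samples from the fitted GLD given the lambda vector, and it uses a 90% subsample and a 5% cutoff as described above.

```r
# Hand-rolled sketch of the KS resample diagnostic (not the GLDEX code itself):
# repeatedly compare a 90% subsample of the data against a same-sized sample
# drawn from the fitted GLD, and count how often KS fails to reject at 5%.
ks.resample <- function(lambda, data, no.test = 1000, prop = 0.9) {
  m <- floor(prop * length(data))
  passes <- 0
  for (i in seq_len(no.test)) {
    s.data <- sample(data, m)                 # subsample of the empirical data
    s.fit  <- rgl(m, lambda, param = "fmkl")  # sample from the fitted GLD
    if (ks.test(s.data, s.fit)$p.value > 0.05) passes <- passes + 1
  }
  passes  # out of no.test; higher is better
}
# e.g. ks.resample(wshLambdaMM, Wilsh5000)
```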
fun.diag.ks.g(result = wshLambdaMM, data = Wilsh5000, no.test = 1000,
param = "fmkl")
Our result here is 53/1000, which suggests the fit is quite poor. A more recent addition to the GLDEX package, not available at the time the related 2007 JSS article was written, is the following:
ks.gof(Wilsh5000, "pgl", wshLambdaMM, param="fmkl")
where pgl is the GLD cumulative distribution function included in the GLDEX package (the analog of the pnorm normal distribution function included in base R).
With a p-value of 1.912e-05, it is pretty safe to reject the hypothesis that the sample data is drawn from the fitted distribution.
Method of Maximum Likelihood (ML)
As Steve Su points out in his JSS article, “The maximum likelihood estimation is usually the preferred method” for “providing definite fits to a data set using the GLD”. The function in the GLDEX package, again for the FMKL parameterization, is
wshLambdaML <- fun.RMFMKL.ml(Wilsh5000)
Checking our goodness of fit tests,
fun.diag.ks.g(result = wshLambdaML, data = Wilsh5000, no.test = 1000, param = "fmkl")
We get a result of 825/1000, and for
ks.gof(Wilsh5000, "pgl", wshLambdaML, param = "fmkl")
we get D = 0.0151, p-value = 0.2
This pvalue, while not spectacular, is far better than what we saw for the method of moments case, and the KS resample test is also much more convincing. But now, the “bad news”: if we look at the four moments of the ML fit,
fun.theo.mv.gld(wshLambdaML[1], wshLambdaML[2], wshLambdaML[3],
wshLambdaML[4], param = "fmkl", normalise="Y")
we get
# mean variance skewness kurtosis
# 0.02850058 1.64456695 2.13494680 NA
While the mean and variance are reasonably close to their empirical counterparts, the skewness is several times larger in magnitude than that of the data, and the kurtosis can’t be determined by the algorithm.
Graphical Comparison of Method of Moments and Maximum Likelihood
Now, invoking step (3), let’s compare the plots resulting from the two different methods, using the fun.plot.fit(.) function provided in the GLDEX package, by overlaying the pdf curve of the fitted distribution on top of the histogram of the returns data. In order to ensure a meaningful plot, however, we should first determine the optimal number of bins in the histogram, using the Freedman-Diaconis rule via the following R function:
bins <- nclass.FD(Wilsh5000) # We get 158
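As an aside, the Freedman-Diaconis rule itself is simple to state: the bin width is 2·IQR·n^(-1/3), and the bin count is the data range divided by that width. The helper below is our own sketch, which should agree with base R's nclass.FD up to its internal rounding details:

```r
# Freedman-Diaconis rule by hand: bin width h = 2 * IQR(x) * n^(-1/3),
# number of bins = range / h, rounded up.
fd.bins <- function(x) {
  h <- 2 * IQR(x) * length(x)^(-1/3)
  ceiling(diff(range(x)) / h)
}

# Sanity check against base R on simulated data:
# z <- rnorm(5000); c(fd.bins(z), nclass.FD(z))
```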
Then, pass nclass = bins to the plotting function in the GLDEX package:
# Method of Moments
fun.plot.fit(fit.obj = wshLambdaMM, data = Wilsh5000, nclass = bins,
param = "fmkl",
xlab = "Returns", main = "Method of Moments Fit")
# Method of Maximum Likelihood
fun.plot.fit(fit.obj = wshLambdaML, data = Wilsh5000,
nclass = bins, param = "fmkl",
xlab = "Returns", main = "Method of Maximum Likelihood")
Visual inspection of the plots is consistent with our findings above: the method of maximum likelihood results in a better fit of the data than the method of moments, despite the fact that the moments line up almost exactly in the case of the latter.
One more set of plots that one should inspect is the set of quantile (“QQ”) plots:
qqplot.gld(fit = wshLambdaMM, data = Wilsh5000, param = "fmkl", type = "str.qqplot",
main = "Method of Moments")
qqplot.gld(fit = wshLambdaML, data = Wilsh5000, param = "fmkl", type = "str.qqplot",
main = "Method of Maximum Likelihood")
Now, if we were to look at these two plots in a vacuum, so to speak, with none of the other prior information available, there is a good case to be made that the QQ plot for the Method of Moments might indicate a better fit. However, note that at about -4 along the horizontal (empirical data) axis, the plotted points start to drift above the line indicating where the horizontal and vertical axis values are equal. This implies that our fit is underestimating market losses as we move out toward the left tail of the distribution. The QQ plot for Maximum Likelihood is more conservative, erring on the side of caution, with the fitted distribution indicating an increased risk of greater loss than the Method of Moments fit. As Steve Su puts it, the general recommendation is to look at the QQ plot and KS test results together to determine the goodness of fit; the QQ plot alone is not a foolproof method.
Conclusion
We have seen, using R and the GLDEX package, how a four-parameter distribution such as the Generalized Lambda Distribution can be used to fit a distribution to market data. While the Method of Moments as a fitting algorithm is highly appealing due to its preserving the moments of the empirical distribution, we sacrifice goodness of fit that can be obtained by using the Method of Maximum Likelihood.
In our next article, we will look at an alternative GLD fitting method known as the Method of L-Moments, as a compromise between the two methods discussed here, and then conclude with a comparison with the normal distribution, which will exhibit quite clearly the advantages of the GLD when it comes to fitting financial returns data.
Very special thanks are due to Dr Steve Su for his contributions and guidance in presenting this topic.
by Don Boyd, Senior Fellow, Rockefeller Institute of Government
The Rockefeller Institute of Government is excited to be developing models to simulate the finances of public pension funds, using R.
Public pension funds invest contributions from governments and public-sector workers in an effort to ensure that they can pay all promised benefits when due. State and local government pension funds in the United States currently have more than $3 trillion invested, more than $2 trillion of which is in equity-like investments. New York City alone, for example, has over $158 billion invested. Governments usually act as a backstop: if pension fund investment returns do better than expected, governments will be able to contribute less, but if investment returns fall short, they will have to contribute more. When that happens, politicians must raise taxes or cut spending programs. These risks often are not well understood or widely discussed. (For a discussion of many of the most significant issues, see Strengthening the Security of Public Sector Defined Benefit Plans.)
We are building stochastic simulation models in R to help quantify the investment risks and their potential consequences. We are modeling the finances of specific pension plans, taking into account all of the main flows such as current and expected benefit payouts to workers, contributions from governments and from workers, and investment returns, and how they affect liabilities and investible assets. The models will take into account the changing demographics of the workforce and retiree populations. We are modeling investment returns stochastically, examining different return scenarios and different economic environments, as well as different governmental contribution policies. We will use these models to evaluate the risks currently being taken and to help provide policy advice to governments, pension funds, and others. (For a full description of our approach, see Modeling and Disclosing Public Pension Fund Risk, and Consequences for Pension Funding Security)
We have chosen R because:
All programming languages have weaknesses. R’s great flexibility means that it is easy to write ill-organized programs that are hard to understand and debug. And poorly written programs that do not take advantage of R’s strengths can be extremely slow. We believe we can compensate for these weaknesses by making our programs modular, using a consistent programming style with appropriate documentation, and by using R features smartly and speed-testing where appropriate.
R analysts and programmers interested in learning about the opportunity to work on this project should examine the programmer/analyst position description and related materials at the Rockefeller Institute’s web site.
The latest in a series by Daniel Hanson
Introduction
Correlations between holdings in a portfolio are of course a key component in financial risk management. Borrowing a tool common in fields such as bioinformatics and genetics, we will look at how to use heat maps in R for visualizing correlations among financial returns, and examine behavior in both a stable and down market.
While base R contains its own heatmap(.) function, the reader will likely find the heatmap.2(.) function in the R package gplots to be a bit more user friendly. A very nicely written companion article entitled A short tutorial for decent heat maps in R (Sebastian Raschka, 2013), which covers more details and features, is available on the web; we will also refer to it in the discussion below.
We will present the topic in the form of an example.
Sample Data
As in previous articles, we will make use of the R packages Quandl and xts to acquire and manage our market data. Here, in a simple example, we will use returns from the following global equity indices over the period 1998-01-05 to the present, and then examine correlations between them:
S&P 500 (US)
RUSSELL 2000 (US Small Cap)
NIKKEI (Japan)
HANG SENG (Hong Kong)
DAX (Germany)
CAC (France)
KOSPI (Korea)
First, we gather the index values and convert to returns:
library(xts)
library(Quandl)

my_start_date <- "1998-01-05"
SP500.Q <- Quandl("YAHOO/INDEX_GSPC", start_date = my_start_date, type = "xts")
RUSS2000.Q <- Quandl("YAHOO/INDEX_RUT", start_date = my_start_date, type = "xts")
NIKKEI.Q <- Quandl("NIKKEI/INDEX", start_date = my_start_date, type = "xts")
HANG_SENG.Q <- Quandl("YAHOO/INDEX_HSI", start_date = my_start_date, type = "xts")
DAX.Q <- Quandl("YAHOO/INDEX_GDAXI", start_date = my_start_date, type = "xts")
CAC.Q <- Quandl("YAHOO/INDEX_FCHI", start_date = my_start_date, type = "xts")
KOSPI.Q <- Quandl("YAHOO/INDEX_KS11", start_date = my_start_date, type = "xts")

# Depending on the index, the final price for each day is either
# "Adjusted Close" or "Close Price". Extract this single column for each:
SP500 <- SP500.Q[, "Adjusted Close"]
RUSS2000 <- RUSS2000.Q[, "Adjusted Close"]
DAX <- DAX.Q[, "Adjusted Close"]
CAC <- CAC.Q[, "Adjusted Close"]
KOSPI <- KOSPI.Q[, "Adjusted Close"]
NIKKEI <- NIKKEI.Q[, "Close Price"]
HANG_SENG <- HANG_SENG.Q[, "Adjusted Close"]

# The xts merge(.) function will only accept two series at a time.
# We can, however, merge multiple columns by downcasting to *zoo* objects.
# Remark: "all = FALSE" uses an inner join to merge the data.
z <- merge(as.zoo(SP500), as.zoo(RUSS2000), as.zoo(DAX), as.zoo(CAC),
           as.zoo(KOSPI), as.zoo(NIKKEI), as.zoo(HANG_SENG), all = FALSE)

# Set the column names; these will be used in the heat maps:
myColnames <- c("SP500","RUSS2000","DAX","CAC","KOSPI","NIKKEI","HANG_SENG")
colnames(z) <- myColnames

# Cast back to an xts object:
mktPrices <- as.xts(z)

# Next, calculate log returns:
mktRtns <- diff(log(mktPrices), lag = 1)
head(mktRtns)
mktRtns <- mktRtns[-1, ] # Remove resulting NA in the 1st row
Generate Heat Maps
As noted above, heatmap.2(.) is the function in the gplots package that we will use. For convenience, we’ll wrap this function inside our own generate_heat_map(.) function, as we will call this parameterization several times to compare market conditions.
As for the parameterization, the comments should be self-explanatory, but we’re keeping things simple by eliminating the dendrogram, and leaving out the trace lines inside the heat map and the density plot inside the color legend. Note also the setting Rowv = FALSE; this ensures the ordering of the rows and columns remains consistent from plot to plot. We’re also just using the default color settings; for customized colors, see the Raschka tutorial linked above.
require(gplots)

generate_heat_map <- function(correlationMatrix, title)
{
  heatmap.2(x = correlationMatrix,        # the correlation matrix input
            cellnote = correlationMatrix, # places correlation value in each cell
            main = title,                 # heat map title
            symm = TRUE,                  # configure diagram as standard correlation matrix
            dendrogram = "none",          # do not draw a row dendrogram
            Rowv = FALSE,                 # keep ordering consistent
            trace = "none",               # turns off trace lines inside the heat map
            density.info = "none",        # turns off density plot inside color legend
            notecol = "black")            # set font color of cell labels to black
}
Next, let’s calculate three correlation matrices using the data we have obtained:
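The listing for this step appears to have been lost in editing, so here is a plausible reconstruction (our own sketch, not the original code): it assumes xts-style date subsetting on the mktRtns object built above, and scales by 100 since the heat maps below report correlations in percent.

```r
# Reconstructed sketch (assumed, not the original listing): three correlation
# matrices, in percent, over the periods examined below.
corr1 <- round(cor(mktRtns) * 100, 2)                    # full sample: Jan 1998 - present
corr2 <- round(cor(mktRtns["2004-01/2004-12"]) * 100, 2) # calm market: 2004
corr3 <- round(cor(mktRtns["2008-10/2009-05"]) * 100, 2) # crisis: Oct 2008 - May 2009
```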
Now, let’s call our heat map function using the total market data set:
generate_heat_map(corr1, "Correlations of World Market Returns, Jan 1998 - Present")
And then, examine the result:
As expected, we trivially have correlations of 100% down the main diagonal. Note that, as shown in the color key, the darker the color, the lower the correlation. By design, using the parameters of the heatmap.2(.) function, we set the title with the main = title setting, and display the correlations in black using the notecol="black" setting.
Next, let’s look at a period of relative calm in the markets, namely the year 2004:
generate_heat_map(corr2, "Correlations of World Market Returns, Jan - Dec 2004")
This gives us:
Note that in this case, at a glance of the darker colors in each of the cells, we can see that we have even lower correlations than those from our entire data set. This may of course be verified by comparing the numerical values.
Finally, let’s look at the opposite extreme, during the upheaval of the financial crisis in 2008-2009:
generate_heat_map(corr3, "Correlations of World Market Returns, Oct 2008 - May 2009")
This yields the following heat map:
Note that in this case, again just at first glance, we can tell the correlations have increased compared to 2004, by the colors changing from dark to light nearly across the board. While there are some correlations that do not increase all that much, such as the SP500/Nikkei and the Russell 2000/Kospi values, there are others across international and capitalization categories that jump quite significantly, such as the SP500/Hang Seng correlation going from about 21% to 41%, and that of the Russell 2000/DAX moving from 43% to over 57%. So, in other words, portfolio diversification can take a hit in down markets.
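One simple way to quantify the “at a glance” comparison is the average pairwise correlation. The helper below is our own addition (not from the original article), applied to the corr2 and corr3 matrices computed earlier:

```r
# Average off-diagonal correlation: a single-number summary of how tightly
# the indices move together in each period.
avg.offdiag <- function(m) mean(m[lower.tri(m)])

# e.g. avg.offdiag(corr2) vs avg.offdiag(corr3): we would expect the
# crisis-period average (corr3) to be noticeably higher.
```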
Conclusion
In this example, we only looked at seven market indices, but for a closer look at how correlations were affected during 2008-09, and how heat maps across a greater number of market sectors compared, the article entitled Diversification is Broken is a recommended and interesting read.
by Joseph Rickert
If I had to pick just one application to be the “killer app” for the digital computer I would probably choose Agent Based Modeling (ABM). Imagine creating a world populated with hundreds, or even thousands of agents, interacting with each other and with the environment according to their own simple rules. What kinds of patterns and behaviors would emerge if you just let the simulation run? Could you guess a set of rules that would mimic some part of the real world? This dream is probably much older than the digital computer, but according to Jan Thiele’s brief account of the history of ABMs that begins his recent paper, R Marries NetLogo: Introduction to the RNetLogo Package in the Journal of Statistical Software, academic work with ABMs didn’t really take off until the late 1990s.
Now, people are using ABMs for serious studies in economics, sociology, ecology, socio-psychology, anthropology, marketing, and many other fields. No less of a complexity scientist than Doyne Farmer (of Dynamic Systems and Prediction Company fame) has argued in Nature for using ABMs to model the complexity of the US economy, and has published on using ABMs to drive investment models. In the following clip from a 2006 interview, Doyne talks about building ABMs to explain the role of subprime mortgages in the housing crisis. (Note that when asked how one would calibrate such a model, Doyne explains the need to collect massive amounts of data on individuals.)
Fortunately, the tools for building ABMs seem to be keeping pace with the ambition of the modelers. There are now dozens of platforms for building ABMs, and it is somewhat surprising that NetLogo, a tool with some whimsical terminology (e.g., agents are called turtles) that was designed for teaching children, has apparently become a de facto standard. NetLogo is Java-based, has an intuitive GUI, ships with dozens of useful sample models, is easy to program, and is available under the GPL 2 license.
As you might expect, R is a perfect complement for NetLogo. Doing serious simulation work requires a considerable amount of statistics for calibrating models, designing experiments, performing sensitivity analyses, reducing data, exploring the results of simulation runs, and much more. The recent JASS paper Facilitating Parameter Estimation and Sensitivity Analysis of Agent-Based Models: a Cookbook Using NetLogo and R by Thiele and his collaborators describes the R / NetLogo relationship in great detail and points to a decade’s worth of reading. But the real fun is that Thiele’s RNetLogo package lets you jump in and start analyzing NetLogo models in a matter of minutes.
Here is part of an extended example from Thiele's JSS paper that shows R interacting with the Fire model that ships with NetLogo. Using some very simple logic, Fire models the progress of a forest fire.
Snippet of NetLogo Code that drives the Fire model
to go
  if not any? turtles  ;; either fires or embers
    [ stop ]
  ask fires
    [ ask neighbors4 with [pcolor = green]
        [ ignite ]
      set breed embers ]
  fade-embers
  tick
end

;; creates the fire turtles
to ignite  ;; patch procedure
  sprout-fires 1 [ set color red ]
  set pcolor black
  set burned-trees burned-trees + 1
end
The general idea is that turtles represent the frontier of the fire as it runs through a grid of randomly placed trees. Not shown in the above snippet is the setup logic, which makes clear that the entire model is controlled by a single parameter representing the density of the trees.
This next bit of R code shows how to launch the Fire model from R, set the density parameter, and run the model.
# Launch RNetLogo and control an initial run of the
# NetLogo Fire Model
library(RNetLogo)
nlDir <- "C:/Program Files (x86)/NetLogo 5.0.5"
setwd(nlDir)
nl.path <- getwd()
NLStart(nl.path)
model.path <- file.path("models", "Sample Models", "Earth Science", "Fire.nlogo")
NLLoadModel(file.path(nl.path, model.path))
NLCommand("set density 70") # set density value
NLCommand("setup")          # call the setup routine
NLCommand("go")             # launch the model from R
Here we see the Fire model running in the NetLogo GUI after it was launched from RStudio.
This next bit of code tracks the progression of the fire as a function of time (model "ticks"), returns results to R and plots them. The plot shows the nonlinear behavior of the system.
# Investigate percentage of forest burned as simulation proceeds and plot
library(ggplot2)
NLCommand("set density 60")
NLCommand("setup")
burned <- NLDoReportWhile("any? turtles", "go",
                          c("ticks", "(burned-trees / initial-trees) * 100"),
                          as.data.frame = TRUE,
                          df.col.names = c("tick", "percent.burned"))
# Plot with ggplot2
p <- ggplot(burned, aes(x = tick, y = percent.burned))
p + geom_line() + ggtitle("Nonlinear forest fire progression with density = 60")
As with many dynamical systems, the Fire model displays a phase transition. Setting the density lower than 55 will not result in the complete destruction of the forest, while setting density above 75 will very likely result in complete destruction. The following plot shows this behavior.
RNetLogo makes it very easy to programmatically run multiple simulations and capture the results for analysis in R. The following two lines of code run the Fire model twenty times for each value of density between 55 and 65, the region surrounding the phase transition.
d <- seq(55, 65, 1)   # vector of densities to examine
res <- rep.sim(d, 20) # run the simulation
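The `rep.sim` helper is defined in Thiele's paper rather than in RNetLogo itself. As a rough sketch of what such a function might look like (a hypothetical reimplementation, assuming the Fire model has already been loaded with NLLoadModel):

```r
# Hypothetical sketch of a rep.sim-style helper; the actual function
# comes from Thiele's paper. Runs the loaded Fire model 'reps' times
# at each density and collects the percent of trees burned.
rep.sim <- function(densities, reps) {
  result <- NULL
  for (d in densities) {
    for (i in 1:reps) {
      NLCommand("set density", d)            # set the density slider
      NLCommand("setup")
      NLDoCommandWhile("any? turtles", "go") # run until the fire dies out
      burned <- NLReport("(burned-trees / initial-trees) * 100")
      result <- rbind(result,
                      data.frame(density = d, percent.burned = burned))
    }
  }
  return(result)
}
```

Each call to `NLReport` pulls a single value back from the running model, so the returned data frame has one row per simulation run.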
The plot below shows the variability of the percent of trees burned as a function of density in the transition region.
My code to generate the plots is available in the file NetLogo_blog, while all of the code from Thiele's JSS paper is available from the journal website.
Finally, here are a few more interesting links related to ABMs.
by Daniel Hanson
Recap and Introduction
Last time in part 1 of this topic, we used the xts and lubridate packages to interpolate a zero rate for every date over the span of 30 years of market yield curve data. In this article, we will look at how we can implement the two essential functions of a term structure: the forward interest rate, and the forward discount factor.
Definitions and Notation
We will apply a mix of notation adopted in the lecture notes Interest Rate Models: Introduction, pp. 3-4, from the New York University Courant Institute (2005), along with chapter 1 of the book Interest Rate Models — Theory and Practice (2nd edition, Brigo and Mercurio, 2006). A presentation by Damiano Brigo from 2007, which covers some of the essential background found in the book, is available here, from the Columbia University website.
First, t ≥ 0 and T ≥ 0 represent time values in years.
P(t, T) represents the forward discount factor at time t ≤ T, where T ≤ 30 years (in our case), as seen at time = 0 (i.e., our anchor date). In other words, again in US Dollar parlance, this means the value at time t of one dollar to be received at time T, based on continuously compounded interest. Note then that, trivially, we must have P(T, T) = 1.
R(t, T) represents the continuously compounded forward interest rate, as seen at time = 0, paid over the period [t, T]. This is also sometimes written as F(0; t, T) to indicate that this is the forward rate as seen at the anchor date (time = 0), but to keep the notation lighter, we will use R(t, T) as is done in the NYU notes.
We then have the following relationships between P(t, T) and R(t, T), based on the properties of continuously compounded interest:
P(t, T) = exp(-R(t, T)·(T - t))      (A)
R(t, T) = -log(P(t, T)) / (T - t)    (B)
Finally, the interpolated market yield curve we constructed last time allows us to find the value of R(0, T) for any T ≤ 30. Then, since by the properties of the exponential function we have
P(t, T) = P(0, T) / P(0, t) (C)
we can determine any discount factor P(t, T) for 0 ≤ t ≤ T ≤ 30, and therefore any R(t, T), as seen at time = 0.
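As a quick sanity check of equations (A), (B), and (C), consider made-up (not market) zero rates of 2% at one year and 3% at two years:

```r
# Toy check of equations (A)-(C) with illustrative zero rates:
# R(0,1) = 2% and R(0,2) = 3%, continuously compounded.
P0_1 <- exp(-0.02 * 1)        # P(0, 1), equation (A) with t = 0
P0_2 <- exp(-0.03 * 2)        # P(0, 2)
P1_2 <- P0_2 / P0_1           # forward discount factor P(1, 2), equation (C)
R1_2 <- -log(P1_2) / (2 - 1)  # forward rate R(1, 2), equation (B)
R1_2                          # 0.04: the implied one-year forward one-year rate
```

The forward rate of 4% makes intuitive sense: earning 2% for the first year and 4% for the second averages out to the 3% two-year zero rate.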
Converting from Dates to Year Fractions
By now, one might be wondering: when we constructed our interpolated market yield curve, we used actual dates, but here we're talking about time in units of years. What's up with that? The answer is that we need to convert from dates to year fractions. While this may seem like a rather trivial proposition (for example, why not just divide the number of days between the start date and maturity date by 365.25?), it turns out that with financial instruments such as bonds, options, and futures, in practice we need to be much more careful. Each of these comes with a specified day count convention, and if it is not followed properly, the result can be the loss of millions for a trading desk.
For example, consider the Actual / 365 Fixed day count convention:
Year Fraction (i.e., T - t) = (Days between Date1 and Date2) / 365
This is one commonly used convention and is very simple to calculate; however, for certain bond calculations, it can become much more complicated, as leap years are considered, as well as local holidays in the country in which the bond is traded, plus more esoteric conditions that may be imposed. To get an idea, look up day count conventions used for government bonds in various countries.
In the book by Brigo and Mercurio noted above, the authors in fact replace the “T  t” expression with a function (tau) τ(t, T), which represents the difference in time based upon the day count convention in effect.
Equation (A) then becomes
P(t, T) = exp(-R(t, T)·τ(t, T))
where τ(t, T) might be, for example, the Actual / 365 Fixed day count convention.
For the remainder of this article, we will implement the "T - t" above as a day count function, as demonstrated in the example to follow.
Implementation in R
We will first revisit the example from our previous article on interpolation of market zero rates, and then use this to demonstrate the implementation of term structure functions to calculate forward discount factors and forward interest rates.
a) The setup from part 1
Let’s first go back to the example from part 1 and construct our interpolated 30year market yield curve, using cubic spline interpolation. Both the xts and lubridate packages need to be loaded. The code is republished here for convenience:
require(xts)
require(lubridate)
ad <- ymd(20140514, tz = "US/Pacific")
marketDates <- c(ad, ad + days(1), ad + weeks(1), ad + months(1),
                 ad + months(2), ad + months(3), ad + months(6),
                 ad + months(9), ad + years(1), ad + years(2),
                 ad + years(3), ad + years(5), ad + years(7),
                 ad + years(10), ad + years(15), ad + years(20),
                 ad + years(25), ad + years(30))
# Use substring(.) to get rid of "UTC"/time zone after the dates
marketDates <- as.Date(substring(marketDates, 1, 10))
# Convert percentage formats to decimal by multiplying by 0.01:
marketRates <- c(0.0, 0.08, 0.125, 0.15, 0.20, 0.255, 0.35, 0.55, 1.65,
                 2.25, 2.85, 3.10, 3.35, 3.65, 3.95, 4.65, 5.15, 5.85) * 0.01
numRates <- length(marketRates)
marketData.xts <- as.xts(marketRates, order.by = marketDates)
createEmptyTermStructureXtsLub <- function(anchorDate, plusYears)
{
  # anchorDate is a lubridate here:
  endDate <- anchorDate + years(plusYears)
  numDays <- endDate - anchorDate
  # We need to convert anchorDate to a standard R date to use
  # the "+ 0:numDays" operation.
  # Also, note that we need a total of numDays + 1
  # in order to capture both end points.
  xts.termStruct <- xts(rep(NA, numDays + 1),
                        as.Date(anchorDate) + 0:numDays)
  return(xts.termStruct)
}
termStruct <- createEmptyTermStructureXtsLub(ad, 30)
for(i in (1:numRates)) termStruct[marketDates[i]] <-
  marketData.xts[marketDates[i]]
termStruct.spline.interpolate <- na.spline(termStruct, method = "hyman")
colnames(termStruct.spline.interpolate) <- "ZeroRate"
b) Check the plot
plot(x = termStruct.spline.interpolate[, "ZeroRate"], xlab = "Time",
     ylab = "Zero Rate",
     main = "Interpolated Market Zero Rates 2014-05-14 - Cubic Spline Interpolation",
     ylim = c(0.0, 0.06), major.ticks = "years",
     minor.ticks = FALSE, col = "darkblue")
This gives us a reasonably smooth curve, preserving the monotonicity of our data points:
c) Implement functions for discount factors and forward rates
We will now implement these functions, utilizing equations (A), (B), and (C) above. We will also take advantage of the functional programming feature in R, by incorporating the Actual / 365 Fixed day count as a functional argument, as an example. One could of course implement any other day count convention as a function of two lubridate dates, and pass it in as an argument.
First, let’s implement the Actual / 365 Fixed day count as a function:
# Simple example of a day count function: Actual / 365 Fixed
# date1 and date2 are assumed to be lubridate dates, so that we can
# easily carry out the subtraction of two dates.
dayCountFcn_Act365F <- function(date1, date2)
{
  yearFraction <- as.numeric((date2 - date1) / 365)
  return(yearFraction)
}
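Because the day count enters the term structure functions below only as a function argument, any other convention can be swapped in. As an illustration, a simplified 30/360 (bond basis) convention using lubridate's day(.), month(.), and year(.) accessors might look like the sketch below; note this is a hypothetical simplification, since real bond-market 30/360 rules add further end-of-month adjustments:

```r
# Simplified sketch of a 30/360 (bond basis) day count convention;
# date1 and date2 are assumed to be lubridate dates.
dayCountFcn_30360 <- function(date1, date2) {
  d1 <- min(day(date1), 30)
  d2 <- if (d1 == 30) min(day(date2), 30) else day(date2)
  yearFraction <- (360 * (year(date2) - year(date1)) +
                    30 * (month(date2) - month(date1)) +
                    (d2 - d1)) / 360
  return(yearFraction)
}
```

Any such function of two lubridate dates can then be passed as the dayCountFunction argument in the code that follows.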
Next, since the forward rate R(t, T) depends on the forward discount factor P(t, T), let’s implement the latter first:
# date1 and date2 are again assumed to be lubridate dates.
fwdDiscountFactor <- function(anchorDate, date1, date2, xtsMarketData,
                              dayCountFunction)
{
  # Convert lubridate dates to base R dates in order to use as xts indices.
  xtsDate1 <- as.Date(date1)
  xtsDate2 <- as.Date(date2)
  if((xtsDate1 > xtsDate2) || xtsDate2 > max(index(xtsMarketData)) ||
     xtsDate1 < min(index(xtsMarketData)))
  {
    stop("Error in date order or range")
  }
  # 1st, get the corresponding market zero rates from our
  # interpolated market rate curve:
  rate1 <- as.numeric(xtsMarketData[xtsDate1]) # R(0, T1)
  rate2 <- as.numeric(xtsMarketData[xtsDate2]) # R(0, T2)
  # P(0, T) = exp(-R(0, T) * (T - 0)) (A), with t = 0 <=> anchorDate
  discFactor1 <- exp(-rate1 * dayCountFunction(anchorDate, date1))
  discFactor2 <- exp(-rate2 * dayCountFunction(anchorDate, date2))
  # P(t, T) = P(0, T) / P(0, t) (C), with t <=> date1 and T <=> date2
  fwdDF <- discFactor2 / discFactor1
  return(fwdDF)
}
Finally, we can then write a function to compute the forward interest rate:
# date1 and date2 are assumed to be lubridate dates here as well.
fwdInterestRate <- function(anchorDate, date1, date2, xtsMarketData,
                            dayCountFunction)
{
  if(date1 == date2) {
    fwdRate <- 0.0 # the trivial case
  } else {
    fwdDF <- fwdDiscountFactor(anchorDate, date1, date2,
                               xtsMarketData, dayCountFunction)
    # R(t, T) = -log(P(t, T)) / (T - t) (B)
    fwdRate <- -log(fwdDF) / dayCountFunction(date1, date2)
  }
  return(fwdRate)
}
d) Calculate discount factors and forward interest rates
As an example, suppose we want the five-year forward three-month discount factor and interest rate:
# Five year forward 3-month discount factor and forward rate:
anchorDate <- ad # the anchor date from the setup above
date1 <- anchorDate + years(5)
date2 <- date1 + months(3)
fwdDiscountFactor(anchorDate, date1, date2, termStruct.spline.interpolate,
                  dayCountFcn_Act365F)
fwdInterestRate(anchorDate, date1, date2, termStruct.spline.interpolate,
                dayCountFcn_Act365F)
# Results are:
# [1] 0.9919104
# [1] 0.03222516
We can also check the trivial case for P(T, T) and R(T, T), where we get 1.0 and 0.0 respectively, as expected:
# Trivial case:
fwdDiscountFactor(anchorDate, date1, date1, termStruct.spline.interpolate,
dayCountFcn_Act365F) # returns 1.0
fwdInterestRate(anchorDate, date1, date1, termStruct.spline.interpolate,
dayCountFcn_Act365F) # returns 0.0
Finally, we can verify that we can recover the market rates at various points along the curve; here, we look at 1Y and 30Y, and can check that we get 0.0165 and 0.0585, respectively:
# Check that we recover market data points:
oneYear <- anchorDate + years(1)
thirtyYears <- anchorDate + years(30)
fwdInterestRate(anchorDate, anchorDate, oneYear,
                termStruct.spline.interpolate,
                dayCountFcn_Act365F) # returns 1.65%
fwdInterestRate(anchorDate, anchorDate, thirtyYears,
                termStruct.spline.interpolate,
                dayCountFcn_Act365F) # returns 5.85%
Concluding Remarks
We have shown how one can implement a term structure of interest rates utilizing tools available in the R packages lubridate and xts. We have, however, limited the example to interpolation within the 30-year range of given market data without discussing extrapolation in cases where forward rates are needed beyond the endpoint. This case does arise in risk management for longer-term financial instruments such as variable annuity and life insurance products, for example. One simple-minded, but sometimes used, method is to fix the zero rate given at the endpoint for all dates beyond that point. A more sophisticated approach is to use the financial cubic spline method as described in the paper by Adams (2001), cited in part 1 of the current discussion. However, xts unfortunately does not provide this interpolation method for us out of the box. Writing our own implementation might make for an interesting topic for discussion down the road, something to keep in mind. For now, however, we have a working term structure implementation in R that we can use to demonstrate derivatives pricing and risk management models in upcoming articles.
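The simple flat extrapolation mentioned above is straightforward to sketch with xts. The helper below is hypothetical (not part of the article's code): it pads the curve with NAs beyond its last date and carries the final observed zero rate forward with na.locf:

```r
# Hypothetical helper: extend an interpolated zero curve past its last
# date by carrying the final zero rate forward (flat extrapolation).
extendCurveFlat <- function(xtsCurve, extraDays) {
  lastDate <- max(index(xtsCurve))
  padding <- xts(rep(NA, extraDays), lastDate + 1:extraDays)
  extended <- rbind(xtsCurve, padding)
  return(na.locf(extended)) # last observation carried forward
}
```

For example, extendCurveFlat(termStruct.spline.interpolate, 365) would extend the curve one year past the 30-year point, holding the 5.85% endpoint rate constant.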
by Ilya Kipnis
In this post, I will demonstrate how to obtain, stitch together, and clean data for backtesting using futures data from Quandl. Quandl was previously introduced in the Revolutions Blog. Functions I will be using can be found in my IKTrading package, available on my GitHub page.
With backtesting, it's often easy to get data for equities and ETFs. However, ETFs are fairly recent financial instruments, making it difficult to conduct long-running backtests (most of the ETFs launched before 2003 are equity ETFs), and equities are all correlated in some way, shape, or form to their respective index (S&P 500, Russell, etc.), with correlations that generally go to 1 right when you most need the diversification.
An excellent source of diversification is the futures markets, which contain contracts on instruments ranging as far and wide as metals, forex, energies, and more. Unfortunately, futures are not continuous in nature, and data for futures are harder to find.
Thanks to Quandl, however, there is some freely available futures data. The link can be found here.
The way Quandl structures its futures is that it uses two separate time series: the first is the front month, which is the contract nearest expiry, and the second is the back month, which is the next contract. Quandl’s rolling algorithm can be found here.
In short, Quandl rolls in a very simple manner; however, for all practical purposes, it is also incorrect. The reason is that no practical trader holds a contract to expiry. Instead, traders roll contracts some time before the expiry of the front month, based on some metric.
This algorithm uses the open interest cross to roll from front to back month and then lags that by a day (since open interest is observed at the end of trading days), and then “rolls” back when the front month open interest overtakes back month open interest (in reality, this is the back month contract becoming the new front month contract). Furthermore, the algorithm does absolutely no adjusting to contract prices. That is, if the front month is more expensive than the back month, a long position would lose the roll premium and a short position would gain it. This is in order to prevent the introduction of a dominating trend bias. The reason that the open interest is chosen is displayed in the following graph:
This is the graph of the open interest of the front month of oil in 2000 (black time series), and the open interest of the back month contract in red. They cross under and over each other in repeatable fashion, making the cross a good signal for when to roll the contract.
Let’s look at the code:
quandClean <- function(stemCode, start_date=NULL, end_date=NULL, verbose=FALSE, ...) {
The arguments to the function are a stem code, a start date, an end date, and a verbose flag (for debugging purposes). The stem code takes the form of CHRIS/<<EXCHANGE>>_<<CONTRACT STEM>>, such as "CHRIS/CME_CL" for oil.
require(Quandl)
if(is.null(start_date)) { start_date <- Sys.Date() - 365*1000 }
if(is.null(end_date)) { end_date <- Sys.Date() + 365*1000 }
frontCode <- paste0(stemCode, 1)
backCode <- paste0(stemCode, 2)
front <- Quandl(frontCode, type="xts", start_date=start_date,
                end_date=end_date, ...)
interestColname <- colnames(front)[grep(pattern="Interest", colnames(front))]
front <- front[, c("Open","High","Low","Settle","Volume",interestColname)]
colnames(front) <- c("O","H","L","C","V","OI")
back <- Quandl(backCode, type="xts", start_date=start_date,
               end_date=end_date, ...)
back <- back[, c("Open","High","Low","Settle","Volume",interestColname)]
colnames(back) <- c("BO","BH","BL","BS","BV","BI") # B for Back
# combine front and back for comparison
both <- cbind(front, back)
This code simply fetches both futures contracts from Quandl and combines them into one xts object. Although Quandl takes a type argument, I have programmed this function specifically for xts types of objects, since I will use xts-dependent functionality later.
Let's move along.
# impute NAs in open interest with -1
both$BI[is.na(both$BI)] <- -1
both$OI[is.na(both$OI)] <- -1
both$lagBI <- lag(both$BI)
both$lagOI <- lag(both$OI)
# impute bad back month open interest prints;
# if it is truly a low quantity, it won't make a
# difference in the computation.
both$OI[both$OI == -1] <- both$lagOI[both$OI == -1]
both$BI[both$BI == -1] <- both$lagBI[both$BI == -1]
This is the first instance of countermeasures taken in the function against messy data. It imputes any open interest NAs with the value -1, and then imputes the first such marker after a non-NA day with the previous day's open interest. Usually, days on which open interest is not available are days on which the contract is lightly traded, so the values imputed in cases where the contract was not traded will be negligible. However, imputing an NA value with a zero in the midst of heavy trading has the potential to display the wrong contract as the one with the higher open interest.
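A minimal illustration of this imputation logic on a made-up open interest series (assuming xts is loaded):

```r
# Made-up open interest series with a two-day gap in the middle.
oi <- xts(c(500, NA, NA, 480), as.Date("2014-01-01") + 0:3)
oi[is.na(oi)] <- -1              # mark missing prints with -1
lagOI <- lag(oi)
oi[oi == -1] <- lagOI[oi == -1]  # fill a marker with the prior day's value
# Result: 500, 500, -1, 480 -- only the first day of the gap is filled,
# since the second day's lagged value is itself the -1 marker.
```

This matches the behavior described above: only the first missing day after a real print gets a meaningful value, which is harmless when the contract is thinly traded.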
both$OIdiff <- both$OI - both$BI
both$tracker <- NA
# the formal open interest cross from front to back
both$tracker[both$OIdiff < 0] <- -1
# since we have to observe the OI cross, we roll the next day
both$tracker <- lag(both$tracker)
# any time we're not on the back contract, we're on the front contract
both$tracker[both$OIdiff > 0] <- 1
both$tracker <- na.locf(both$tracker)
This code sets up the system for keeping track of which contract is in use. When the difference in open interest crosses under zero, that's the formal open interest cross, and we roll a day later. On the other hand, when the open interest difference crosses back over zero, that isn't a cross; that is the back month contract becoming the front month contract. For instance, assume that you rolled to the June contract in the third week of May. Quandl would display the June contract as the back contract in May, but come June, that June contract is now the front contract instead. Therefore, there is no lag on the computation in the second instance.
frontRelevant <- both[both$tracker == 1, c(1:6)]
backRelevant <- both[both$tracker == -1, c(7:12)]
colnames(frontRelevant) <- colnames(backRelevant) <-
  c("Open","High","Low","Close","Volume","OpenInterest")
relevant <- rbind(frontRelevant, backRelevant)
relevant[relevant == 0] <- NA
# remove any incomplete days, print a message saying
# how many days were removed, and print them if desired
instrument <- gsub("CHRIS/", "", stemCode)
relevant$Open[is.na(relevant$Open)] <-
  relevant$Close[(which(is.na(relevant$Open)) - 1)]
NAs <- which(is.na(relevant$Open) | is.na(relevant$High) |
             is.na(relevant$Low) | is.na(relevant$Close))
if(verbose) {
  message(paste(instrument, "had", length(NAs),
                "incomplete days removed from data."))
  print(relevant[NAs,])
}
if(length(NAs) > 0) {
  relevant <- relevant[-NAs,]
}
Using the previous tracker variable, the code is then able to compile the relevant data for the futures contract. That is, front contract when the front contract is more heavily traded, and vice versa.
This code uses xts-dependent functionality with the rbind call. In this instance, there are two separate streams: the front month stream and the back month stream. Through the use of xts functionality, it's possible to merge the two streams indexed by time.
Next, the code imputes all NA open values with the close (settle) from the previous trading day. In the case that opens are the only missing field, I opted for this over removing the observation entirely. Next, any observation with a missing open, high, low, or close value gets removed. This is simply my personal preference, rather than attempting to take some form of liberty with imputing data to the highs, lows, and closes based on the previous day, or some other pattern thereof.
If verbose is enabled, the function will print the actual data removed.
ATR <- ATR(HLC = HLC(relevant))
# Technically somewhat cheating, but could be stated in terms of
# lags 2, 1, and 0.
# A spike is defined as a data point on Close that's more than
# 5 ATRs away from both the preceding and following day.
spikes <- which(abs((relevant$Close - lag(relevant$Close)) / ATR$atr) > 5
                & abs((relevant$Close - lag(relevant$Close, -1)) / ATR$atr) > 5)
if(verbose) {
  message(paste(instrument, "had", length(spikes),
                "spike days removed from data."))
  print(relevant[spikes,])
}
if(length(spikes) > 0) {
  relevant <- relevant[-spikes,]
}
out <- relevant
return(out)
}
Finally, some countermeasures against spiky types of data. I define a spike as a price move in the closing price which is 5 ATRs (in this case, n=14) away in either direction from both the previous and next day. Spikes are removed. After this, the code is complete.
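To see the spike rule in isolation, here is a toy version on a made-up price vector, using a fixed stand-in value instead of the actual ATR(14):

```r
# Toy spike detector: flag closes more than 5 "ATRs" away from both
# the preceding and the following day.
prices <- c(100, 101, 100.5, 130, 101.5, 102) # the 130 print is bad data
atr <- 2                                      # fixed stand-in for ATR(14)
prevClose <- c(NA, head(prices, -1))          # previous day's close
nextClose <- c(tail(prices, -1), NA)          # next day's close
spikes <- which(abs(prices - prevClose) / atr > 5 &
                abs(prices - nextClose) / atr > 5)
spikes  # 4: only the 130 print is flagged
```

Requiring the jump on both sides is what distinguishes a one-day bad print from a genuine gap move, where the price stays at the new level.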
To put this into perspective visually, here is a plot of the 30day Federal Funds rate (CHRIS/CME_FF), from 2008, demonstrating all the improvements my process makes to Quandl’s raw data in comparison to the front month continuous (current) contract.
The raw, front-month data is displayed in black (the long lines are missing data from Quandl, displayed as zeroes, but modified in scale for the sake of the plot). The results of the algorithm are presented in blue.
At the very beginning, it’s apparent that the more intelligent rolling algorithm adapts to what would be the new contract prices sooner. Secondly, all of those long bars on which Quandl had missing data have been removed so as not to interfere with calculations. Lastly, at the very end, that downward “spike” in prices has also been dealt with, making for what appears to be a significantly more correct pricing series.
To summarize, here's what the code does:
1) Downloads the two data streams
2) Keeps track of the proper contract at all time periods
3) Imputes or removes bad data, bad data being defined as incomplete observations or spikes in the data.
The result is an xts object practically identical to one downloaded for more commonly available data, such as equities or ETFs, which allows for a greater array of diversification in terms of the instruments on which to backtest trading strategies, such as with the quantstrat package.
The results of such backtests can be found on my blog, and my two R packages (this functionality will be available in my IKTrading package) can be found on my GitHub page.
by Daniel Hanson
Introduction
Last time, we used the discretization of a Brownian Motion process with a Monte Carlo method to simulate the returns of a single security, with the (rather strong) assumption of a fixed drift term and fixed volatility. We will return to this topic in a future article, as it relates to basic option pricing methods, which we will then expand upon.
For more advanced derivatives pricing methods, however, as well as an important topic in its own right, we will talk about implementing a term structure of interest rates using R. This will be broken up into two parts: 1) working with dates and interpolation (the subject of today’s article), and 2) calculating forward interest rates and discount factors (the topic of our next article), using the results presented below.
Working with Dates in R
The standard date objects in base R, to be honest, are not the most user-friendly when it comes to basic date calculations such as adding days, months, or years. For example, just to add, say, five years to a given date, we would need to do the following:
startDate <- as.Date('2014-05-27')
pDate <- as.POSIXlt(startDate)
endDate <- as.Date(paste(pDate$year + 1900 + 5, "-", pDate$mon + 1, "-",
                         pDate$mday, sep = ""))
So, you’re probably asking yourself, “wouldn’t it be great if we could just add the years like this?”:
endDate <- startDate + years(5) # ?
Well, the good news is that we can, by using the lubridate package. In addition, instantiating a date is also easier, simply by indicating the date format (e.g. ymd(.) for year-month-day) as the function. Below are some examples:
require(lubridate)
startDate <- ymd(20140527)
startDate            # Result is: "2014-05-27 UTC"
anotherDate <- dmy(26102013)
anotherDate          # Result is: "2013-10-26 UTC"
startDate + years(5) # Result is: "2019-05-27 UTC"
anotherDate - years(40) # Result is: "1973-10-26 UTC"
startDate + days(2)  # Result is: "2014-05-29 UTC"
anotherDate - months(5) # Result is: "2013-05-26 UTC"
Remark: Note that “UTC” is appended to the end of each date, which indicates the Coordinated Universal Time time zone (the default). While it is not an issue in these examples, it will be important to specify a particular time zone when we set up our interpolated yield curve, as we shall see shortly.
Interpolation with Dates in R
When interpolating values in a time series in R, we revisit our old friend, the xts package, which provides both linear and cubic spline interpolation. We will demonstrate this with a somewhat realistic example.
Suppose the market yield curve data on 2014-05-14 appears on a trader's desk as follows:
Overnight ON 0.08%
One week 1W 0.125%
One month 1M 0.15%
Two months 2M 0.20%
Three months 3M 0.255%
Six months 6M 0.35%
Nine months 9M 0.55%
One year 1Y 1.65%
Two years 2Y 2.25%
Three years 3Y 2.85%
Five years 5Y 3.10%
Seven years 7Y 3.35%
Ten years 10Y 3.65%
Fifteen years 15Y 3.95%
Twenty years 20Y 4.65%
Twentyfive years 25Y 5.15%
Thirty years 30Y 5.85%
This is typical of yield curve data, where the dates get spread out farther over time. Each rate is a zero (coupon) rate, meaning, in US Dollar parlance, the rate paid on $1 of debt issued today and maturing at the given point in the future, with no intermediate coupon payments; principal is returned and interest is paid in full at the end date. In order to have a fully functional term structure, that is, to be able to calculate forward interest rates and forward discount factors off of the yield curve for any two dates, we will need to interpolate the zero rates. The date from which the time periods are measured is often referred to as the "anchor date", and we will adopt this terminology.
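For example, under continuous compounding, the 30Y zero rate of 5.85% above implies that $1 due in 30 years is worth roughly 17 cents today:

```r
# Price today of $1 received in 30 years, at the 30Y zero rate of 5.85%
# (continuous compounding): P(0, 30) = exp(-R * T)
exp(-0.0585 * 30) # ~0.173
```

This discount factor interpretation of the zero rate is exactly what the term structure functions in part 2 of this topic will compute for arbitrary date pairs.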
To start, we will use dates generated with lubridate operations to replicate the above yield curve data schedule. We will then match the dates up with the corresponding rates and put them into an xts object. We will use 2014-05-14 as our anchor date.
# ad = anchor date, tz = time zone
# (see http://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
ad <- ymd(20140514, tz = "US/Pacific")
marketDates <- c(ad, ad + days(1), ad + weeks(1), ad + months(1),
                 ad + months(2), ad + months(3), ad + months(6),
                 ad + months(9), ad + years(1), ad + years(2),
                 ad + years(3), ad + years(5), ad + years(7),
                 ad + years(10), ad + years(15), ad + years(20),
                 ad + years(25), ad + years(30))
# Use substring(.) to get rid of "UTC"/time zone after the dates
marketDates <- as.Date(substring(marketDates, 1, 10))
# Convert percentage formats to decimal by multiplying by 0.01:
marketRates <- c(0.0, 0.08, 0.125, 0.15, 0.20, 0.255, 0.35, 0.55, 1.65,
                 2.25, 2.85, 3.10, 3.35, 3.65, 3.95, 4.65, 5.15, 5.85) * 0.01
numRates <- length(marketRates)
marketData.xts <- as.xts(marketRates, order.by = marketDates)
head(marketData.xts)
# Gives us the result:
#               [,1]
# 2014-05-14 0.00000
# 2014-05-15 0.00080
# 2014-05-21 0.00125
# 2014-06-14 0.00150
# 2014-07-14 0.00200
# 2014-08-14 0.00255
Note that in this example, we specified the time zone. This is important, as lubridate will automatically convert to your local time zone from UTC. If we hadn't specified the time zone, then out here on the US west coast, depending on the time of day, we could get the following result from the head(.) command, where the dates get converted to Pacific time; note how the dates end up shifted back one day:
              [,1]
2014-05-13 0.00000
2014-05-14 0.00080
2014-05-20 0.00125
2014-06-13 0.00150
2014-07-13 0.00200
2014-08-13 0.00255
Some might call this a feature, and others may call it a quirk, but in any case, it is better to specify the time zone in order to get consistent results.
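For working with time zones explicitly, lubridate also provides with_tz(.) and force_tz(.), which make the behavior above easy to see:

```r
# with_tz converts the same instant to another zone's clock;
# force_tz keeps the clock time and reinterprets the zone.
d <- ymd_hms("2014-05-14 00:00:00") # midnight UTC
with_tz(d, "US/Pacific")  # "2014-05-13 17:00:00 PDT" -- shifted back a day
force_tz(d, "US/Pacific") # "2014-05-14 00:00:00 PDT" -- same clock time
```

The head(.) discrepancy shown above is precisely the with_tz(.) behavior: the instant is unchanged, but its printed date moves back a day on the US west coast.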
If we take a little trip back to our earlier post on plotting xts data (see section Using plot(.) in the xts package) a few months ago, we can have a look at a plot of our market data:
colnames(marketData.xts) <- "ZeroRate"
plot(x = marketData.xts[, "ZeroRate"], xlab = "Time", ylab = "Zero Rate",
     main = "Market Zero Rates 2014-05-14", ylim = c(0.0, 0.06),
     major.ticks = "years", minor.ticks = FALSE, col = "red")
From here, the next steps will be to create an empty daily xts object spanning the full 30 years, insert the known market rates, and interpolate the remaining values. To create the empty xts object, we borrow an idea from the xts vignette (Section 3.1, "Creating new data: the xts constructor"), and come up with the following function:
createEmptyTermStructureXtsLub <- function(anchorDate, plusYears)
{
  # anchorDate is a lubridate here:
  endDate <- anchorDate + years(plusYears)
  numDays <- endDate - anchorDate
  # We need to convert anchorDate to a standard R date to use
  # the "+ 0:numDays" operation.
  # Also, note that we need a total of numDays + 1 in order to
  # capture both end points.
  xts.termStruct <- xts(rep(NA, numDays + 1), as.Date(anchorDate) + 0:numDays)
  return(xts.termStruct)
}
Then, using our anchor date ad (2014-05-14), we generate an empty xts object going out daily for 30 years:
termStruct <- createEmptyTermStructureXtsLub(ad, 30)
head(termStruct)
tail(termStruct)
# Results are (as desired):
# > head(termStruct)
#            [,1]
# 2014-05-14   NA
# 2014-05-15   NA
# 2014-05-16   NA
# 2014-05-17   NA
# 2014-05-18   NA
# 2014-05-19   NA
# > tail(termStruct)
#            [,1]
# 2044-05-09   NA
# 2044-05-10   NA
# 2044-05-11   NA
# 2044-05-12   NA
# 2044-05-13   NA
# 2044-05-14   NA
Next, substitute in the known rates from our market yield curve. While there is likely a slicker way to do this, a loop is transparent, easy to write, and doesn’t take all that long to execute in this case:
for(i in (1:numRates)) termStruct[marketDates[i]] <- marketData.xts[marketDates[i]]
head(termStruct, 8)
tail(termStruct)
# Results are as follows. Note that we capture the market rates
# at ON, 1W, and 30Y:
# > head(termStruct, 8)
#               [,1]
# 2014-05-14 0.00000
# 2014-05-15 0.00080
# 2014-05-16      NA
# 2014-05-17      NA
# 2014-05-18      NA
# 2014-05-19      NA
# 2014-05-20      NA
# 2014-05-21 0.00125
# > tail(termStruct)
#              [,1]
# 2044-05-09     NA
# 2044-05-10     NA
# 2044-05-11     NA
# 2044-05-12     NA
# 2044-05-13     NA
# 2044-05-14 0.0585
Finally, we use interpolation methods provided in xts to fill in the rates in between. We have two choices, either linear interpolation, using the xts function na.approx(.), or cubic spline interpolation, using the function na.spline(.). As the names suggest, these functions will replace NA values in the xts object with interpolated values. Below, we show both options:
termStruct.lin.interpolate < na.approx(termStruct)
termStruct.spline.interpolate < na.spline(termStruct, method = "hyman")
head(termStruct.lin.interpolate, 8)
head(termStruct.spline.interpolate, 8)
tail(termStruct.lin.interpolate)
tail(termStruct.spline.interpolate)
# Results are as follows. Note again that we capture the market rates
# at ON, 1W, and 30Y:
# > head(termStruct.lin.interpolate, 8)
#            ZeroRate
# 2014-05-14 0.000000
# 2014-05-15 0.000800
# 2014-05-16 0.000875
# 2014-05-17 0.000950
# 2014-05-18 0.001025
# 2014-05-19 0.001100
# 2014-05-20 0.001175
# 2014-05-21 0.001250
# > head(termStruct.spline.interpolate, 8)
#                ZeroRate
# 2014-05-14 0.0000000000
# 2014-05-15 0.0008000000
# 2014-05-16 0.0009895833
# 2014-05-17 0.0011166667
# 2014-05-18 0.0011937500
# 2014-05-19 0.0012333333
# 2014-05-20 0.0012479167
# 2014-05-21 0.0012500000
# > tail(termStruct.lin.interpolate)
#              ZeroRate
# 2044-05-09 0.05848084
# 2044-05-10 0.05848467
# 2044-05-11 0.05848851
# 2044-05-12 0.05849234
# 2044-05-13 0.05849617
# 2044-05-14 0.05850000
# > tail(termStruct.spline.interpolate)
#              ZeroRate
# 2044-05-09 0.05847347
# 2044-05-10 0.05847877
# 2044-05-11 0.05848407
# 2044-05-12 0.05848938
# 2044-05-13 0.05849469
# 2044-05-14 0.05850000
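The linearly interpolated values over the first week can be sanity-checked with base R's approx() (the same calculation na.approx() performs on the NA positions): from 0.00080 on 2014-05-15 to 0.00125 on 2014-05-21 is six days, giving a daily step of (0.00125 - 0.00080) / 6 = 0.000075.

```r
# Reproduce the first week of linear interpolation with base R:
d <- as.numeric(as.Date(c("2014-05-15", "2014-05-21")))
r <- c(0.00080, 0.00125)
interp <- approx(d, r, xout = d[1]:d[2])$y
round(interp, 6)
# 0.000800 0.000875 0.000950 0.001025 0.001100 0.001175 0.001250
```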
We can also have a look at the plots of the interpolated curves. Note that the linearly interpolated curve (in green) is the same as what we saw when we did a line plot of the market rates above:
plot(x = termStruct.lin.interpolate[, "ZeroRate"], xlab = "Time", ylab = "Zero Rate",
     main = "Interpolated Market Zero Rates 2014-05-14",
     ylim = c(0.0, 0.06), major.ticks = "years", minor.ticks = FALSE,
     col = "darkgreen")
lines(x = termStruct.spline.interpolate[, "ZeroRate"],
      col = "darkblue")
legend(x = 'topleft', legend = c("Lin Interp", "Spline Interp"),
       lty = 1, col = c("darkgreen", "darkblue"))
One final note: When we calculated the interpolated values using cubic splines earlier, we set method = "hyman" in the xts function na.spline(.). By doing this, we are able to preserve the monotonicity in the data points. Without it, using the default, we get dips in the curve between some of the data points, as shown here:
# Using the default method for cubic spline interpolation:
termStruct.spline.interpolate.default <- na.spline(termStruct)
colnames(termStruct.spline.interpolate.default) <- "ZeroRate"
plot(x = termStruct.spline.interpolate.default[, "ZeroRate"], xlab = "Time",
     ylab = "Zero Rate",
     main = "Interpolated Market Zero Rates 2014-05-14 - Default Cubic Spline",
     ylim = c(0.0, 0.06), major.ticks = "years",
     minor.ticks = FALSE, col = "darkblue")
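The same effect can be illustrated in base R with splinefun(), the engine behind cubic spline interpolation in stats. This is a small sketch with made-up data (a steep rise followed by a near-flat tail, like a yield curve): the default spline can overshoot and dip between knots, while the Hyman filter guarantees a monotone interpolant on monotone data.

```r
# Increasing data with a steep rise then a near-flat tail:
x <- c(0, 1, 2, 10)
y <- c(0.000, 0.100, 0.120, 0.125)
xx <- seq(0, 10, by = 0.1)
f.default <- splinefun(x, y)                    # default "fmm" spline
f.hyman   <- splinefun(x, y, method = "hyman")  # monotonicity-preserving
any(diff(f.default(xx)) < 0)   # may be TRUE: the default can dip between knots
any(diff(f.hyman(xx)) < 0)     # FALSE: the Hyman interpolant stays monotone
```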
Summary
In this article, we have demonstrated how one can take market zero rates, place them into an xts object, and then interpolate the rates between the data points using the xts functions for linear and cubic spline interpolation. In an upcoming post (part 2), we will discuss the essential term structure functions for calculating forward rates and forward discount factors.
For further and mathematically more detailed reading on the subject, the paper Smooth Interpolation of Zero Curves (Ken Adams, 2001) is highly recommended. A “financial cubic spline” as described in the paper would in fact be a useful option to have as a method in xts cubic spline interpolation.
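As a preview of that follow-up topic, here is a minimal sketch (not code from the upcoming post; the function names are illustrative only) of discount factors and forward rates under continuous compounding, where P(0, t) = exp(-r(t) * t) and the forward rate between t1 and t2 is (r2 * t2 - r1 * t1) / (t2 - t1):

```r
# Illustrative helpers, assuming continuously compounded zero rates:
discFactor <- function(r, t) exp(-r * t)                 # P(0, t)
fwdRate <- function(r1, t1, r2, t2) {
  # Forward rate implied by zero rates r1 at t1 and r2 at t2
  (r2 * t2 - r1 * t1) / (t2 - t1)
}
fwdDiscFactor <- function(r1, t1, r2, t2) {
  # Forward discount factor P(t1, t2) = P(0, t2) / P(0, t1)
  discFactor(r2, t2) / discFactor(r1, t1)
}

fwdRate(0.02, 1, 0.025, 2)   # 0.03
# The forward discount factor equals discounting at the forward rate
# over the interval length:
all.equal(fwdDiscFactor(0.02, 1, 0.025, 2),
          discFactor(fwdRate(0.02, 1, 0.025, 2), 1))   # TRUE
```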
by Joseph Rickert
I was very happy to have been able to attend R/Finance 2014, which wrapped up a couple of weeks ago. In general, the talks were at a very high level of play, some dealing with brand new ideas and many presented at a significant level of technical or mathematical sophistication. Fortunately, most of the slides from the presentations are quite detailed and available at the conference site. Collectively, these presentations provide a view of the boundaries of the conceptual space imagined by the leaders in quantitative finance. Some of this space covers infrastructure issues involving ideas for pushing the limits of R (Some Performance Improvements for the R Engine), building new infrastructure (New Ideas for Large Network Analysis), or (Building Simple Data Caches), for example. Others are involved with new computational tools (Solving Cone Constrained Convex Programs) or attempt to push the limits on getting some actionable insight from the mathematical abstractions: (Portfolio Inference with the One Weird Trick) or (Twinkle, twinkle little STAR: Smooth Transition AR Models in R), for example.
But while the talks may be illuminating, the real takeaways from the conference are the R packages. These tools embody the work of the thought leaders in the field of computational finance and are the means for anyone sufficiently motivated to understand this cutting-edge work. By my count, 20 of the 44 tutorials and talks given at the conference were based on a particular R package. Some of the packages listed in the following table are well-established, and others are works-in-progress sitting out on R-Forge or GitHub, providing opportunities for the interested to get involved.
R/Finance 2014 Talk | Package | Description
Introduction to data.table | data.table | Extension of the data frame
An Example-Driven Hands-on Introduction to Rcpp | Rcpp | Functions to facilitate integrating R with C++
Portfolio Optimization: Utility, Computation, Equities Applications | | Environment for teaching Financial Engineering and Computational Finance
Re-Evaluation of the Low Risk Anomaly via Matching | | Implementation of the Coarsened Exact Matching algorithm
BCP Stability Analytics: New Directions in Tactical Asset Management | bcp | Bayesian analysis of change point problems
On the Persistence of Cointegration in Pairs Trading | | Engle-Granger cointegration models
Tests for Robust Versus Least Squares Factor Model Fits | robust | Robust methods
The R Package cccp: Solving Cone Constrained Convex Programs | cccp | Solver for convex problems with cone constraints
Twinkle, twinkle little STAR: Smooth Transition AR Models in R | twinkle | Modeling smooth transition models
Asset Allocation with Higher Order Moments and Factor Models | | Global optimization by differential evolution / numerical methods for portfolio optimization
Event Studies in R | | Event study and extreme event analysis
An R package on Credit Default Swaps | | Tools for pricing credit default swaps
New Ideas for Large Network Analysis, Implemented in R | | Implicitly restarted Lanczos methods for R
Intermediate and Long Memory Time Series | | Simulate and detect intermediate and long memory processes (in development)
Stochvol: Dealing with Stochastic Volatility in Time Series | stochvol | Efficient Bayesian inference for stochastic volatility (SV) models
Divide and Recombine for the Analysis of Large Complex Data with R | | Package for using R with Hadoop
gpusvcalibration: Fast Stochastic Volatility Model Calibration using GPUs | gpusvcalibration | Fast calibration of stochastic volatility models for option pricing
The FlexBayes Package | FlexBayes | MCMC engine for hierarchical generalized linear models, with connections to WinBUGS and OpenBUGS
Building Simple Redis Data Caches | | Rcpp bindings for Redis, connecting R to the Redis key/value store
Package pbo: Probability of Backtest Overfitting | pbo | Uses combinatorial symmetric cross-validation to implement performance tests
Many of these packages / projects also have supplementary material that is worth chasing down. Be sure to take a look at Alexios Ghalanos's recent post, which provides an accessible introduction to his stellar keynote address.
Many thanks to the organizers of the conference who, once again, did a superb job, and to the many professionals attending who graciously attempted to explain their ideas to a dilettante. My impression was that most of the attendees thoroughly enjoyed themselves and that the general sentiment was expressed by the last slide of Stephen Rush's presentation.
by Joseph Rickert
R/Finance 2014 is just about a week away. Over the past four or five years this has become my favorite conference. It is small (300 people this year), exceptionally well-run, and always offers an eclectic mix of theoretical mathematics, efficient, practical computing, industry best practices, and trading "street smarts". This clip of Blair Hull delivering a keynote speech at R/Finance 2012 is an example of the latter. It ought to resonate with anyone who has followed some of the hype surrounding Michael Lewis's recent book Flash Boys.
In any event, I thought it would be a good time to look at the relationship between R and Finance and to highlight some resources that are available to students, quants and data scientists looking to do computational finance with R.
First off, consider what computational finance has done for R. From the point of view of the development and growth of the R language, I think it is pretty clear that computational finance has played the role of the ultimate "Killer App" for R. This high-stakes, competitive environment, where a theoretical edge or a marginal computational advantage can mean big rewards, has led to R package development in several areas including time series, optimization, portfolio analysis, risk management, high performance computing and big data. Additionally, challenges and crises in the financial markets have helped accelerate R's growth into big data. In this podcast, Michael Kane talks about the analysis of the 2010 Flash Crash he did with Casey King and Richard Holowczak and describes using R with large financial datasets.
Conversely, I think that it is also clear that R has done quite a bit to further computational finance. R’s ability to facilitate rapid data analysis and visualization, its great number of available functions and algorithms and the ease with which it can interface to new data sources and other computing environments has made it a flexible tool that evolves and adapts at a pace that matches developments in the financial industry. The list of packages in the Finance Task View on CRAN indicates the symbiotic relationship between the development of R and the needs of those working in computational finance. On the one hand, there are over 70 packages under the headings Finance and Risk Management that were presumably developed to directly respond to a problem in computational finance. But, the task view also mentions that packages in the Econometrics, Multivariate, Optimization, Robust, SocialSciences and TimeSeries task views may also be useful to anyone working in computational finance. (The High Performance Computing and Machine Learning task views should probably also be mentioned.) The point is that while a good bit of R is useful to problems in computational finance, R has greatly benefited from the contributions of the computational finance community.
If you are just getting started with R and computational finance, have a look at John Nolan's R as a Tool in Computational Finance. Other resources for R and computational finance that you may find helpful are:
Package Vignettes
Several of the Finance-related packages have very informative vignettes or associated websites. For example, have a look at those for the packages portfolio, rugarch, rquantlib (check out the cool rotating distributions), PerformanceAnalytics, and MarkowitzR.
Data
Quandl has become a major source for financial data, which can be easily accessed from R.
Websites
Relevant websites include the RMetrics site, The R Trader, Burns Statistics, and Guy Yollin's repository of presentations.
YouTube
Three videos that I found to be particularly interesting are recordings of the presentations "Finance with R" by Ronald Hochreiter, "Using R in Academic Finance" by Sanjiv Das, and "Portfolio Construction in R" by Elliot Norma.
Blogs
Over the past couple of years, R-Bloggers has posted quite a few finance-related applications. Prominent among these is the series on Quantitative Finance Applications in R by Daniel Hanson on the Revolutions blog.
Books
Books on R and Finance include the excellent RMetrics series of ebooks, Statistics and Data Analysis for Financial Engineering by David Ruppert, Financial Risk Modelling and Portfolio Optimization with R by Bernhard Pfaff, Introduction to R for Quantitative Finance by Daróczi et al., and a brand new title, Computational Finance: An Introductory Course with R by Argimiro Arratia.
Coursera
This August, Eric Zivot will teach the course Introduction to Computational Finance and Financial Econometrics which will emphasize R.
The R Journal
The R Journal frequently publishes finance-related papers. The current issue (Volume 5/2, December 2013) contains three relevant papers: Performance Attribution for Equity Portfolios by Yang Lu and David Kane; Temporal Disaggregation of Time Series by Christoph Sax and Peter Steiner; and betategarch: Simulation, Estimation and Forecasting of Beta-Skew-t-EGARCH Models by Genaro Sucarrat.
Conferences
In addition to R/Finance (Chicago) and useR! 2014 (Los Angeles), look for R-based computational finance expertise at the 8th R/RMetrics Workshop (Paris).
Community
R-SIG-Finance is one of R's most active special interest groups.