This is a sequel to the recent post on smile dynamics, generalizing things a bit. It's just a stream of thoughts, nothing that is actually backed by my own experience or anything. Hopefully it makes sense nevertheless.

The PL of an interest rate derivative portfolio reacts to movements in market data like broker quotes for FRAs, Euribor futures and swaps, and implied volatilities for caps / floors and swaptions. These quotes are processed into rate curves and volatility surfaces, which are then fed into pricing models that value the portfolio and thereby produce its PL.

A trader is interested in how his portfolio reacts to market movements, i.e. in its greeks, which are by definition partial derivatives of the portfolio value with respect to the individual market data inputs. A portfolio may for example be particularly sensitive to the 10y and 15y Euribor 6M swap quotes and to the 5y/15y and 10y/10y swaption volatilities. The trader would then see a large interest rate delta in these two buckets of the 6M curve and a large vega in these two cells of the swaption vega matrix.

The input variables are far from independent. For example, if the 10y swap quote rises by 10 basis points, it is likely that the 15y swap quote rises too, and often even by roughly the same amount. Similarly, movements of the cells of implied volatility surfaces are correlated, in particular if their distance is not too big. What is more, an interest rate move will possibly affect the implied volatility too, in a systematic way: if rates rise, lognormal volatilities will typically decrease, so as to roughly keep the product of the forward level and the lognormal volatility (which is by definition the so-called (approximate) basispoint volatility) constant. This relationship obviously depends on the nature of the volatility quote, so it will look quite different for normal implied volatilities.
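A minimal numerical sketch of this co-movement, with made-up numbers:

```python
# Hypothetical figures for illustration: a forward swap rate and its
# lognormal (Black) implied volatility. The (approximate) basispoint
# volatility is just their product.
forward = 0.02        # 2% forward rate
lognormal_vol = 0.30  # 30% lognormal volatility

bp_vol = forward * lognormal_vol  # 0.006, i.e. about 60 bp
print(f"bp vol: {bp_vol * 1e4:.0f} bp")

# If rates rise to 2.5% and the bp vol stays roughly constant, the
# quoted lognormal vol would be expected to drop accordingly:
new_forward = 0.025
new_lognormal_vol = bp_vol / new_forward
print(f"new lognormal vol: {new_lognormal_vol:.2%}")  # 24.00%
```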

Furthermore we have *technical* dependencies. Say you work with sensitivities on zero rates instead of market quotes. Then a movement in the EONIA zero rates will affect the Euribor 6M forward curve, because this curve is bootstrapped using the EONIA curve as the discounting curve. So a sensitivity on the EONIA curve might come from a direct dependency of a deal on the EONIA curve, or from an indirect dependency via the forward curves.
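The aggregation of such a direct and indirect sensitivity is just the chain rule; a toy sketch with entirely made-up numbers (the Jacobian of the forwards with respect to the EONIA bucket would in practice be a by-product of the curve bootstrap):

```python
# Direct dependency: the deal is discounted on EONIA.
direct_eonia_delta = 1200.0  # EUR per bp move of the EONIA bucket

# Indirect dependency: the EONIA bucket moves the bootstrapped Euribor 6M
# forwards, on which the deal also has deltas (all numbers hypothetical).
d_forward_d_eonia = [0.02, 0.05, 0.01]   # dF_i / dz_EONIA per forward bucket
forward_deltas = [800.0, 1500.0, 400.0]  # EUR per bp on the forward buckets

indirect_eonia_delta = sum(j * d for j, d in zip(d_forward_d_eonia, forward_deltas))
total_eonia_delta = direct_eonia_delta + indirect_eonia_delta
print(total_eonia_delta)  # 1295.0
```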

So the abstract picture is this: you have tons of correlated data points as input, a very long gradient vector describing the sensitivity of the PL to these inputs, and maybe also a big Hessian matrix describing second order effects.
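In this picture the PL for a given market move is just the second order Taylor expansion; a sketch with toy numbers:

```python
import numpy as np

# Second order PL prediction from a gradient and a Hessian (toy figures).
g = np.array([100.0, -50.0, 20.0])  # first order sensitivities per input
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 0.3]])     # second order sensitivities
dx = np.array([1.0, 2.0, -1.0])     # market move in the inputs

pl = g @ dx + 0.5 * dx @ H @ dx     # PL ~ g'dx + 1/2 dx'H dx
print(pl)  # -15.85
```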

Wouldn’t it be reasonable to apply a principal component analysis first, select the most important risk drivers and only manage the sensitivities on these drivers? Conceptually this is nothing more than a suitable rotation of coordinates such that the new coordinates are uncorrelated (with respect to the historical data). Once this is done you can sort the dimensions by the share of the total variance of the input data they explain and focus on the most important ones. These may be only 5 to 10 drivers instead of hundreds of input data points.
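The mechanics are standard; a sketch assuming we have a history X of daily changes in the market data inputs (here fabricated as one common factor plus noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_quotes = 500, 20
# synthetic history of daily input changes: rows = days, columns = quotes
common = rng.normal(size=(n_days, 1))
X = common + 0.3 * rng.normal(size=(n_days, n_quotes))

# eigendecomposition of the historical covariance matrix
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # share of variance per component
print("variance explained by first PC:", explained[0])

# rotate a gradient of bucket sensitivities into the new coordinates
gradient = rng.normal(size=n_quotes)       # stand-in for the portfolio deltas
pc_deltas = eigvecs.T @ gradient           # one sensitivity per component
```

With the dominant common factor, the first component alone explains most of the variance here; `pc_deltas` are the sensitivities one would manage instead of the original buckets.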

Hedging would be more efficient, because you could hedge the same amount of PL variance with fewer hedge positions. The main risk drivers could be summarized in terms of only a few numbers for management reporting. Sensitivity limits could be put on these main drivers. PL could be explained accurately by the movements and sensitivities of only a few quantities. Stress tests could be formulated directly in the new coordinates, yielding more realistic total scenarios, since natural co-movements of other variables not directly stressed are automatically accounted for (e.g. huge rate shifts would imply appropriate adjustments to volatilities and keep them realistic).
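The variance argument can be made concrete: the PL variance g'Σg decomposes exactly into per-component contributions, so the fraction removed by hedging the first k components is easy to read off. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# synthetic history of input changes: one common factor plus noise
common = rng.normal(size=(500, 1))
X = common + 0.3 * rng.normal(size=(500, n))
sigma = np.cov(X, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(sigma)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

g = rng.normal(size=n)                 # stand-in for the portfolio gradient
pc_deltas = eigvecs.T @ g
var_per_pc = eigvals * pc_deltas**2    # PL variance carried by each component
total_var = var_per_pc.sum()           # equals g @ sigma @ g

k = 3
hedged_fraction = var_per_pc[:k].sum() / total_var
print(f"PL variance removed by hedging the first {k} PCs: {hedged_fraction:.1%}")
```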

If you think about it, the kind of interest rate delta you would see for swaptions and caps / floors this way includes a historically calibrated smile dynamic. Which you could compare to the dynamics we discussed last time, for example.

Also, the new coordinates typically allow for a natural interpretation, like parallel movements of curves, rotations and so on, so the view wouldn’t necessarily be more abstract than the original one.
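For instance, with an exponentially decaying correlation between curve buckets (a common stylized assumption), the first principal component comes out with all loadings of the same sign, i.e. a parallel-shift-like move:

```python
import numpy as np

# stylized correlation matrix: nearby buckets more correlated than distant ones
n = 10
buckets = np.arange(n)
corr = np.exp(-0.1 * np.abs(buckets[:, None] - buckets[None, :]))

eigvals, eigvecs = np.linalg.eigh(corr)
first_pc = eigvecs[:, -1]  # eigh returns ascending order; last = largest

# all loadings share one sign: essentially a parallel shift of the curve
print(np.sign(first_pc))
```

The second component would typically show one sign change across the buckets, i.e. a steepening / flattening rotation.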

What is needed is the coordinate transformation layer, ideally directly in the front office system, so that it can be used in real time. In addition, a data history has to be collected and maintained, so that the principal components can be updated on a regular basis.