XVA for Bermudan Swaptions

Welcome back, and sorry for the hiatus, which was longer than planned. Anyway, here, finally, is the (presumably) last episode in the series of posts on proxy pricing of exotics for XVA simulations. In my last posts I wrote a bit about the general design principles I came up with and how I used them to implement the FX TaRF. Today I will describe how to do similar things for bermudan swaptions using the same design principles.

Some general thoughts beforehand though, reflecting on what we are doing here at all. What we have is some model M in which we want to compute some XVA or potential exposure figure \Xi for our exotic deal I, possibly and probably together with a whole portfolio of other deals which interact in a non-linear way, i.e.

\Xi( \sum_i I_i ) \neq \sum_i \Xi( I_i)

To do this we need NPVs \nu of each I_i at some time t conditional on some model state m(t) in our model M:

\nu = \nu(I_i,t,m(t))

M might be operated under a real-world measure (e.g. for exposure measurement) or a risk-neutral measure (e.g. for CVA calculation), depending on the nature of the figure to compute. The model state m(t) implies a market data scenario, i.e. certain discount and forwarding term structures, swaption or cap volatility surfaces and the like.
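As a concrete example (stated schematically, in standard textbook form), a unilateral CVA of a netting set would read

\text{CVA} = (1-R) \int_0^T \mathsf{E}\left[ D(t) \max\left( \sum_i \nu(I_i,t,m(t)), 0 \right) \right] \mathrm{d}PD(t)

with recovery rate R, discount factor D(t) and counterparty default probability PD(t); the positive part of the netted sum is exactly what makes \Xi non-additive across the deals.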

Ideally the NPV \nu is computed in the same pricing model we use for our regular P&L computation or trading activities, calibrated to the market data scenario created by M in the way we would do it for usual pricing. That means in general we have an “outer” model M and an “inner” model N_i for each instrument I_i. Even if the instruments all allow for the same type of model for the inner evaluation, these inner models will probably differ in their calibrated parameters:

For example, if we have a portfolio composed of single currency swaps, callable swaps and swaptions, we could choose the Hull White model to evaluate the callable swaps and swaptions and price the swaps directly on the yield curves. But each callable swap / swaption would in general require a different set of calibration instruments, so we would end up with a set of local, mutually inconsistent models, one for each individual swaption valuation. We could try to use a global model in which all instruments can be priced consistently and maybe use this model as the outer model for the XVA calculation at the same time, provided that the XVA figure \Xi allows for risk neutral valuation at all.

This could be a Libor market model which is one of the most flexible models in terms of global calibration. The global calibration alone would be a delicate task however. If more than one currency is involved and maybe even other asset classes the task gets even more demanding. And what do you do if you want a real world exposure figure? This approach seems quite heavy, not impossible, but heavy.

So it seems reasonable to stick with the concept of separate outer and inner models in general. The question is then how to efficiently compute the inner single NPVs \nu(I_i,t,m(t)) conditional on market scenarios generated by the outer model. Full pricing is not an option even for relatively simple inner models like Hull White one factor models.

Another idea could be to compute some reference NPVs in the time direction t assuming some arbitrary market data scenarios (e.g. simply using today’s market data rolled forward), together with a “rich” set of sensitivities which allow us to estimate NPVs under arbitrary market scenarios by Taylor approximation and by interpolating in the time direction (or using a theta sensitivity). Maybe automatic differentiation comes into play here.
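Schematically, such an approximation would read

\nu(I_i,t,m(t)) \approx \nu(I_i,t_0,m_0) + \Theta \, (t-t_0) + \sum_k \frac{\partial \nu}{\partial x_k} \Delta x_k + \frac{1}{2} \sum_{k,l} \frac{\partial^2 \nu}{\partial x_k \partial x_l} \Delta x_k \Delta x_l

where the x_k are the market data coordinates (zero rates, volatilities, …), \Delta x_k their shifts relative to the reference scenario m_0 and all sensitivities are computed at (t_0, m_0).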

The idea I followed in the previous posts is different, simpler, and goes like this. During an initial single deal pricing performed by a monte carlo simulation we collect path data and build a regression model of conditional NPVs on the model state, the pricing time and possibly more ancillary variables (like the accumulated amount in the case of the FX TaRF), just as it is done in the Longstaff Schwartz method for early exercise pricing (or almost like this, see below). Then, when asked for an NPV at a pricing time t under a given market scenario m(t) (and possibly more conditions like the already accumulated amount of a target structure), we imply the model state belonging to the market scenario and use our regression model to produce the desired NPV very efficiently.

The limitation of this method is that the given market data can in general not be replicated in full beauty. In the case of the FX TaRF, for example, we could replicate any given FX spot rate, but not its volatility, which is fixed and implied by the original pricing model. In the case of bermudan swaptions in the Hull White model we will see in this post that we can replicate one given reference rate, i.e. the general level of the yield curve, but not the shape of the curve, the spread to other yield curves involved and in particular not the volatility structure, which are all fixed again and implied by the model used for the initial pricing.

Actually, what can be replicated corresponds exactly to what is explicitly modelled as a stochastic quantity in the model: since for the TaRF we used a Black model, possibly with local volatility but without a stochastic factor for the volatility, the volatility is fixed. The FX spot on the other hand, being the explicitly modelled quantity, can be replicated at any desired value.

In the case of the Hull White model we can imply the level of the yield curve, since this is modelled through the one factor in the model, but we can not match a given shape, since the model does not have the flexibility to change the shape of the curve (the shape does change in fact, but in a fixed manner, not driven by stochastic factors in the model). The same holds for the volatility structure: it is implied by the initial pricing model again.
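To see this in formulas: in the classical Hull White parametrisation with constant reversion a, the time-t zero rate for maturity T is an affine function of the short rate state r,

R(t,T;r) = -\frac{\ln A(t,T)}{T-t} + \frac{B(t,T)}{T-t} \, r, \qquad B(t,T) = \frac{1-e^{-a(T-t)}}{a}

so the whole curve moves with the single state variable, while the maturity-dependent slope B(t,T)/(T-t), i.e. the way the shape reacts, is fixed by the model parameters.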

If one accepts these limitations, however, the method offers a very fast and handy way to approximate the desired scenario NPVs.

Let’s look at how this works for bermudan swaptions. I’d like to explain it by going through an example I wrote. We start like this

// Original evaluation date and rate level

Real rateLevelOrig = 0.02;
Date refDateOrig(12, January, 2015);
Settings::instance().evaluationDate() = refDateOrig;

// the yield term structure for the original pricing
// this must _not_ be floating, see the warning in
// the proxy engine's header.

Handle<YieldTermStructure> ytsOrig(boost::make_shared<FlatForward>(
         refDateOrig, rateLevelOrig, Actual365Fixed()));

The original pricing date is 12-Jan-2015 and the yield term structure on this date is flat at 2%. We set QuantLib’s evaluation date to this date and construct the yield term structure.

One important point is that the yield term structure has a fixed reference date, i.e. it does not change when the evaluation date changes. The reason is that this yield term structure will be linked to the initial pricing model which in turn will be used in the proxy pricing engine later on, which finally relies on the fact that the model does not change when shifting the evaluation date for scenario pricing.
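For contrast, a curve whose reference date floats with the evaluation date would be set up via the settlement-days constructor; a minimal sketch (the quote name is made up for illustration):

// floating alternative: the reference date is recomputed from the global
// evaluation date and the level can be bumped through the quote
boost::shared_ptr<SimpleQuote> rateLevelFloating =
    boost::make_shared<SimpleQuote>(rateLevelOrig);
Handle<YieldTermStructure> ytsFloating(boost::make_shared<FlatForward>(
    0, TARGET(), Handle<Quote>(rateLevelFloating), Actual365Fixed()));

A curve like this is exactly what must not be linked to the initial pricing model.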

It is like having a fixed frame, as seen from the outer model described above, within which we do local pricings at different times and under different market scenarios, relying on the surrounding frame not to change. Likewise the rates describing the yield term structure should not change, nor should the initial pricing model be recalibrated or change its parameters during the scenario pricings. The pricing model will be a Hull White model

// the gsr model in T-forward measure, T=50 chosen arbitrarily here
// the first model uses the fixed yts, used for the mc pricing
// generating the proxy

boost::shared_ptr<Gsr> gsrFixed = boost::make_shared<Gsr>(
            ytsOrig, stepDates, sigmas, reversion, 50.0);

The monte carlo pricing engine will be set up like this

boost::shared_ptr<PricingEngine> mcEngine =
     MakeMcGaussian1dNonstandardSwaptionEngine<>(gsrFixed)
          .withSteps(1) // the gsr model allows for large steps
          .withSamples(10000)
          .withSeed(42)
          .withCalibrationSamples(10000)
          .withProxy(true);

We do not need a fine time grid since the stochastic process belonging to the GSR model allows for exact evolution over arbitrarily large time intervals. We use 10000 paths to calibrate the Longstaff-Schwartz regression models and another 10000 for the final pricing. The seed is 42, why not. The last attribute says that we want not only a usual pricing of the swaption, but also a proxy information object which can be used for scenario pricing later on. Since generating it consumes some additional time, it is optional.

Now we can do a good old pricing on our instrument swaption2 (I skipped how this was created).

  swaption2->setPricingEngine(mcEngine);
  Real npvOrigMc = swaption2->NPV();
  Real errorOrigMc = swaption2->errorEstimate();

What I also did not show is that I created a reference integral engine to verify the prices of the mc engine and, later on, of the proxy engine. The reference engine relies on a floating yield term structure, both with respect to the reference date and the rate level. The same holds for the proxy engine, so we can move forward in time, change the rate level and compare the pricings in the two engines.
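A sketch of what such a reference setup might look like (the variable names are mine and the actual example code may differ), reusing the floating curve ytsFloating sketched further above:

// second gsr model on the floating curve, same step dates, volatilities
// and reversion as the fixed one
boost::shared_ptr<Gsr> gsrFloating = boost::make_shared<Gsr>(
    ytsFloating, stepDates, sigmas, reversion, 50.0);

// integral engine used as the reference pricer (129 points, 7 stddevs)
boost::shared_ptr<PricingEngine> integralEngine =
    boost::make_shared<Gaussian1dNonstandardSwaptionEngine>(
        gsrFloating, 129, 7.0);

swaption2->setPricingEngine(integralEngine);
Real npvOrigIntegral = swaption2->NPV();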

The result of the initial pricing is

Bermudan swaption proxy pricing example
Pricing results on the original reference date (January 12th, 2015):
Integral engine npv = 0.0467257 (timing: 97723mus)
MC       engine npv = 0.0459135 error estimate 0.000809767 (timing: 5.55552e+06mus)

The reference NPV from the integral engine is 467bp, slightly higher than the monte carlo price of 459bp. This is perfectly expected since the approximate exercise decisions based on the regression model are sub-optimal, so the option NPV will be underestimated systematically. Together with the error estimate (one standard deviation) of 8bp this all looks quite ok.

One word about the computation times. The monte carlo simulation (10000 paths both for calibration and pricing) takes 5.5 seconds. This is somewhat in the expected region and I am completely relying on the standard monte carlo framework of the library here. The integral engine on the other hand takes 97 milliseconds, which is quite slow. This can easily be brought down to 5 milliseconds just by using fewer integration points and fewer covered standard deviations (here I used 129 points and 7 standard deviations), without losing too much accuracy. This seems quite ok, since neither the GSR model nor the integral engine are much optimized for speed currently.

To create the proxy engine we can say

  boost::shared_ptr<PricingEngine> proxyEngine =
         boost::make_shared<ProxyNonstandardSwaptionEngine>(
              swaption2->proxy(), rateLevelRef, maturityRef, 64, 7.0, true);

where we just pass the proxy result from the pricing above together with quotes representing the rate level of the scenario pricing and the maturity to which the rate level belongs. Remember that we can only prescribe the level, but not the shape of the yield curve in the scenario pricing. So I allow the user to choose a maturity, e.g. 10 years, and prescribe the (continuously compounded) zero yield w.r.t. this maturity. The proxy engine will then imply the Hull White model’s state such that this rate level is matched for the given maturity. The shape of the yield curve, though, is implied by the initial model and can not be changed. I am repeating myself, ain’t I? Do I? Repeat? God.
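Conceptually, implying the state amounts to a one-dimensional root search on the model’s zero bond; a sketch of the idea (not the engine’s actual code) could look like this:

// sketch: imply the (standardized) model state y such that the model's
// continuously compounded zero yield over [t, t + tau] equals zeroLevel
// (uses ql/math/solvers1d/brent.hpp)
Real impliedState(const boost::shared_ptr<Gaussian1dModel> &model,
                  const Real t, const Real tau, const Real zeroLevel) {
    Brent solver;
    return solver.solve(
        [&model, t, tau, zeroLevel](const Real y) {
            Real zb = model->zerobond(t + tau, t, y);
            return -std::log(zb) / tau - zeroLevel;
        },
        1.0E-8, 0.0, 0.1); // accuracy, initial guess, initial step
}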

The next two parameters refer to the way we do the scenario pricing between two of the original structure’s exercise dates: We determine the next available exercise date (if any) and build a grid of 64 points on the exercise date covering a number of standard deviations (here 7.0) of the state variable around its mean, all this conditional on the pricing time and the model state (which is implied by the rate level as explained above). We then compute the NPV on each of these grid points using the regression model from the initial monte carlo pricing. Finally we interpolate between the points using cubic splines and integrate the resulting function against the state variable’s density (which is, by the way, available in closed form).
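In code, the last two steps might look roughly like this (a sketch only; gridPoints, stdDevs, mean, variance and regressedNpv stand for quantities the engine has at hand at this point):

// evaluate the regression model on a state grid around the conditional mean
std::vector<Real> y(gridPoints), npv(gridPoints);
Real stdDev = std::sqrt(variance);
for (Size i = 0; i < gridPoints; ++i) {
    y[i] = mean + stdDev * (-stdDevs + 2.0 * stdDevs * i / (gridPoints - 1.0));
    npv[i] = regressedNpv(y[i]);
}

// interpolate with a cubic spline and integrate against the conditional
// (Gaussian) density of the state variable, here with a simple trapezoid rule
CubicNaturalSpline f(y.begin(), y.end(), npv.begin());
NormalDistribution density(mean, stdDev);
Real result = 0.0;
for (Size i = 1; i < gridPoints; ++i)
    result += 0.5 * (f(y[i - 1]) * density(y[i - 1]) +
                     f(y[i]) * density(y[i])) * (y[i] - y[i - 1]);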

Let’s see what we get under some scenarios for the evaluation date and rates. One scenario is simply created by e.g. saying

 Settings::instance().evaluationDate() = Date(12, June, 2015);
 rateLevelRefQuote->setValue(0.02); 
 maturityRefQuote->setValue(10.5);  

which moves the evaluation date half a year ahead to 12-Jun-2015 and requires the 10.5y zero rate (which corresponds to the maturity of the swaption as the reference point) to be 2%. For the scenarios in the example we get

Pricing results on June 12th, 2015, reference rate 0.02 with maturity 10.5
Integral engine npv = 0.0444243
Proxy    engine npv = 0.0430997 (timing: 127mus)

Pricing results on June 11th, 2019, reference rate 0.025 with maturity 6.5
Integral engine npv = 0.0372685
Proxy    engine npv = 0.0361514 (timing: 170mus)

Pricing results on June 11th, 2019, reference rate 0.02 with maturity 6.5
Integral engine npv = 0.0223442
Proxy    engine npv = 0.0223565 (timing: 159mus)

Pricing results on June 11th, 2019, reference rate 0.015 with maturity 6.5
Integral engine npv = 0.0128876
Proxy    engine npv = 0.0141377 (timing: 169mus)

Pricing results on January 11th, 2020, reference rate 0.02 with maturity 6
Integral engine npv = 0.0193142
Proxy    engine npv = 0.0194446 (timing: 201mus)

Pricing results on January 11th, 2021, reference rate 0.02 with maturity 5
Integral engine npv = 0.0145542
Proxy    engine npv = 0.0137726 (timing: 208mus)

Pricing results on June 11th, 2024, reference rate 0.02 with maturity 1.5
Integral engine npv = 0.00222622
Proxy    engine npv = 0.00292998 (timing: 282mus)

which does not look too bad. The proxy engine’s computation time is also not bad, around 100-300 microseconds. Again, this can be made faster by using fewer integration points, but it already seems competitive enough for XVA simulations.

The proxy regression model deserves some comments. It is not just the Longstaff-Schwartz regression model, since that only looks at states where the exercise value is positive. This restriction is essential for a good fit of the regression function (which I chose to be simply a quadratic function), since to the left of the exercise point the option value (which equals the continuation value in this area) flattens and would destroy the global fit.

What I did until now is just to calibrate two separate models, one in the region corresponding to positive exercise values and another in the complementary region. This is similar to the regression model for the FX TaRF, where I also used two quadratic polynomials, if you remember (if not, you can read about it here or in more detail in the paper).
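In formulas, the proxy for a given exercise date is the piecewise quadratic

\hat{\nu}(y) = a_{otm} + b_{otm} y + c_{otm} y^2 \;\; \text{for} \;\; y \le y^\ast, \qquad \hat{\nu}(y) = a_{itm} + b_{itm} y + c_{itm} y^2 \;\; \text{for} \;\; y > y^\ast

where y is the model state, y^\ast the cutoff separating the non-exercise from the exercise region (for a payoff whose exercise value increases in y) and the coefficients are estimated by least squares on the respective subsets of the path data.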

This solution is not perfect and can probably be improved. The following picture shows the regression models from the example. The different exercise dates are distinguished by color (0 denotes the first exercise date, in dark color, 1 the second, and so on until 9, which denotes the 10th exercise date, in bright color).

[Figure lsswaption: regression functions per exercise date, dark to bright colors]

What you can see is the discontinuity at the cutoff point between exercise and non-exercise area. It is exactly this jump that one should work on I guess. But not now.

In technical terms I relied on Klaus’ implementation of the Longstaff Schwartz algorithm in longstaffschwartzpathpricer.hpp. This needed some amendments to allow for non-american call rights, since american exercise is what it was designed for, I believe. Apart from this I mainly inserted a hook in the pricer

virtual void post_processing(const Size i,
                             const std::vector<StateType> &state,
                             const std::vector<Real> &price,
                             const std::vector<Real> &exercise) {}

with an empty default implementation. This function is called during the calibration phase, passing the collected data to a potential consumer, which can do useful things with it like building a regression model for proxy pricing like in our use case here. To do this I derived from Klaus’ class as follows

class LongstaffSchwartzProxyPathPricer
    : public LongstaffSchwartzPathPricer<typename SingleVariate<>::path_type> {
  public:
    LongstaffSchwartzProxyPathPricer(
        const TimeGrid &times,
        const boost::shared_ptr<EarlyExercisePathPricer<PathType> > &pricer,
        const boost::shared_ptr<YieldTermStructure> &termStructure);

    const std::vector<boost::function1<Real, StateType> > basisSystem() const {
        return v_;
    }
    const std::vector<Array> coefficientsItm() const { return coeffItm_; }
    const std::vector<Array> coefficientsOtm() const { return coeffOtm_; }
    const StateType cutoff() const { return cutoff_; }

which offers inspectors for the two regression models’ coefficients, the cutoff point separating them and the underlying function system. The implementation of the hook function is simple

void LongstaffSchwartzProxyPathPricer::post_processing(
    const Size i, const std::vector<StateType> &state,
    const std::vector<Real> &price, const std::vector<Real> &exercise) {

    std::vector<StateType> x_itm, x_otm;
    std::vector<Real> y_itm, y_otm;

    cutoff_ = -QL_MAX_REAL;

    for (Size j = 0; j < state.size(); ++j) {
        if (exercise[j] > 0.0) {
            x_itm.push_back(state[j]);
            y_itm.push_back(price[j]);
        } else {
            x_otm.push_back(state[j]);
            y_otm.push_back(price[j]);
            if(state[j]>cutoff_)
                cutoff_ = state[j];
        }
    }

    if (v_.size() <= x_itm.size()) {
        coeffItm_[i - 1] =
            GeneralLinearLeastSquares(x_itm, y_itm, v_).coefficients();
    } else {
        // see longstaffschwartzpricer.hpp
        coeffItm_[i - 1] = Array(v_.size(), 0.0);
    }

    if (v_.size() <= x_otm.size()) {
        coeffOtm_[i - 1] =
            GeneralLinearLeastSquares(x_otm, y_otm, v_).coefficients();
    } else {
        // see longstaffschwartzpricer.hpp
        coeffOtm_[i - 1] = Array(v_.size(), 0.0);
    }
}

and does nothing more than set up the two data sets and estimate the regression models on them.

The coefficients are read by the monte carlo engine and serve as the essential part of the proxy object, which can then be passed to the proxy engine. And we are done. You can find the full example code here.


Pricing Engine Design for exotic XVA simulations (using the example of FX TARFs)

In my previous post I wrote about some ideas to efficiently approximate the value of an fx exotic (an fx tarf in fact). One main motivation is to use such a fast pricing in an XVA simulation.


This post is dedicated to the design I came up with to fit the idea as accurately as possible into the existing QuantLib architecture.

The next post will then present details about the approximation scheme itself, some numerical examples comparing the approximation with a full pricing under several market and time decay scenarios, and performance tests.

Good design is of utmost importance. Even in a prosperous neighbourhood a broken window, not swiftly repaired, will soon lead to a second and after a while a degenerate area. Luckily Luigi would not allow this to happen in QuantLib city of course.

These are my thoughts on the design in general:

  • we should have two pricing engines, one MC and one Proxy engine. We should not have only one engine with some additional methods to extract the approximate NPVs in a proprietary way
  • the proxy engine should behave just like any other engine, e.g. the time decay should consistently be deduced from the global evaluation date, not through some special parameter, and the relevant market data should be given by standard structures
  • the implementation of new instruments and pricing engines following the same idea should be easy and be based on a common interface
  • the end user interface used in client code should be easy to use and foolproof
  • XVA simulation is an application of the proxy engine, but there is no strict connection to this use case – you can also use the proxy engine “just” to compute npvs faster if high accuracy is not required

Or in short: There should be nothing special about the whole thing. Curb your enthusiasm, just do a plain solid job, don’t try to impress anyone. Okay. Let’s start with the instrument class.

FxTarf(const Schedule schedule, 
       const boost::shared_ptr<FxIndex> &index,
       const Real sourceNominal,
       const boost::shared_ptr<StrikedTypePayoff> &shortPositionPayoff,
       const boost::shared_ptr<StrikedTypePayoff> &longPositionPayoff,
       const Real target, 
       const CouponType couponType = capped,
       const Real shortPositionGearing = 1.0,
       const Real longPositionGearing = 1.0,
       const Handle<Quote> accumulatedAmount = Handle<Quote>(),
       const Handle<Quote> lastAmount = Handle<Quote>());

The constructor takes

  • the structure’s schedule (meaning the value dates),
  • the underlying fx index representing for example an ECB fx fixing (this is a new class too, because there doesn’t seem to be a fx index in QuantLib yet, but I do not go into details about that here),
  • the nominal of the structure (in foreign, asset or source currency, there are many names for that)
  • the counterparties’ payoff profiles, which are normally short puts and long calls, all sharing the same strike
  • the target level where the structure knocks out
  • the coupon type (see the previous post I linked above)
  • the gearing of the two sides of the deal

The last two parameters accumulatedAmount and lastAmount are optional. If not given, the FxTarf computes the already accumulated amount reading the index’s historic fixings.

If specified, on the other hand, historic fixings are ignored and the given accumulated amount is used. The lastAmount in this context is needed only in case the last fixing has already occurred, but the associated payment is still in the future. The reason to introduce these somewhat redundant parameters is as follows. On one hand it may be handy not to set all historic fixings for the fx index, but to set the accumulated amount directly. Maybe you get the deal information from some source system and it already provides the current accumulated amount along with the deal data. More importantly, during an XVA simulation you might not want to set all fixings in the IndexManager. You can do that, if you like, but it may not be handy, because after each path you have to erase all fixings again, or maybe you do not even simulate each fixing date and just want to interpolate the accumulated fixings. In any case it is just a convenience parameter; use it or just ignore it, as illustrated below.
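For illustration, a construction driven by a quote could look like the following sketch (schedule, fxIndex, the payoffs, gearings and the coupon type enum value stand in for the actual deal setup and may differ from the real interface details):

// drive the accumulated amount through a quote instead of historic fixings
boost::shared_ptr<SimpleQuote> accumulated =
    boost::make_shared<SimpleQuote>(0.05);

FxTarf tarf(schedule, fxIndex, 100000000.0,
            boost::make_shared<PlainVanillaPayoff>(Option::Put, 1.10),
            boost::make_shared<PlainVanillaPayoff>(Option::Call, 1.10),
            0.10,            // target
            FxTarf::capped,  // coupon type (assuming the enum is nested)
            2.0, 1.0,        // gearings, illustrative values
            Handle<Quote>(accumulated));

// within a simulation we then simply update the quote
accumulated->setValue(0.07);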

For a full pricing of the tarf we can use a monte carlo engine which is constructed in the usual way, for example

boost::shared_ptr<PricingEngine> mcEngine 
    = MakeMcFxTarfEngine<>(gkProcess)
      .withStepsPerYear(steps)
      .withSamples(samples)
      .withSeed(42)
      .withProxy();

The parameters here are the (generalized) Black Scholes process, the time steps to simulate per year, the number of samples and the seed for the RNG.

The last modifier .withProxy() is the only special thing here: By default the engine will just compute an npv (and error estimate) as any other mc engine does. If the engine is set up with the proxy flag on the other hand, additional information during the simulation is collected and analyzed to produce some proxy information object that can be used later for approximate pricings. We will see in a minute how.

We need this modifier because the simulation gets slower when creating the proxy, so we should be able to switch it off.

Now we set the engine and can compute the (full) npv:

tarf.setPricingEngine(mcEngine);
std::cout << "MCEngine NPV = " << tarf.NPV() << " error estimate = "
          << tarf.errorEstimate() << std::endl;

If we just want the proxy pricing, we can of course skip the full npv calculation, but it doesn’t hurt since the npv is always produced, even if only calculating the information for the proxy engine.

To use the proxy engine we have to start with an engine which can produce this piece of information. The proxy engine is then fed with the proxy object:

boost::shared_ptr<PricingEngine> proxyEngine =
    boost::make_shared<ProxyFxTarfEngine>(tarf.proxy(), 
                                          fxQuote, 
                                          gkProcess->riskFreeRate());
tarf.setPricingEngine(proxyEngine);
std::cout << "ProxyEngine NPV = " << tarf.NPV() << std::endl;

The proxy engine is constructed with

  • the proxy description produced by the mc engine, which can be retrieved from the instrument by .proxy() (this is a special result, which seems important enough not to bury it in the additional results heap — this is nothing innovative though, it’s like the bps or fair rate for swaps as an example)
  • the fx quote used for the pricing – i.e. the essential market data needed for the proxy pricing
  • the discount curve for the pricing (which is taken as the domestic rate curve from our black scholes process in our example client code)

In addition, the global evaluation date will determine the reference date for the valuation, just as usual.

If the instrument does not provide a proxy object (e.g. because the mc engine was told so, see above), or if the proxy object is not suitable for the proxy engine to be constructed, an exception will be thrown.

What is going on behind the scenes: I added an interface for instruments that are capable of proxy pricing:

class ProxyInstrument {
  public:
    //! Base class proxy descriptions for approximate pricing engines
    struct ProxyDescription {
        // check if proxy description is valid
        virtual void validate() const = 0;
    };

    virtual boost::shared_ptr<ProxyDescription> proxy() const = 0;
};

The only method to implement is proxy which should return an object containing the information necessary for a compatible proxy engine to compute approximate npvs (see below for what compatible means). The information itself should be an object derived from ProxyInstrument::ProxyDescription. It has to provide a validate method that should check the provided data for consistency.

Here is how the fx tarf instrument implements this interface:

class FxTarf : public Instrument, public ProxyInstrument {
  public:
    //! proxy description
    struct Proxy : ProxyDescription {
        struct ProxyFunction {
            virtual Real operator()(const Real spot) const = 0;
        };
        // maximum number of open fixings
        Size maxNumberOpenFixings;
        // last payment date, the npvs are forward npvs w.r.t. this date
        Date lastPaymentDate;
        // buckets for accumulated amount, e.g.
        // 0.0, 0.1, 0.2, 0.3, 0.4 means
        // [0.0,0.1) has index 0
        // [0.1,0.2) has index 1
        // ...
        // [0.4,target] has index 4
        std::vector<Real> accBucketLimits;
        // proxy functions
        // first index is openFixings-1
        // second index is accAmountIndex
        // A function F should implement
        // operator()(Real spot) = npv
        std::vector<std::vector<boost::shared_ptr<ProxyFunction> > > functions;
        void validate() const {
            QL_REQUIRE(functions.size() == maxNumberOpenFixings,
                       "maximum number of open fixings ("
                           << maxNumberOpenFixings
                           << ") must be equal to function rows ("
                           << functions.size() << ")");
            for (Size i = 0; i < functions.size(); ++i) {
                QL_REQUIRE(functions[i].size() == accBucketLimits.size(),
                           "number of acc amount buckets ("
                               << accBucketLimits.size()
                               << ") must be equal to function columns ("
                               << functions[i].size() << ") in row " << i);
            }
        }
    };
/* ... */

This says that the specific (or one possible) proxy information of a fx tarf consists of some descriptive data, which is

  • the maximum number of open (future) fixings
  • the last payment date of the structure – see below
  • a bucketing of the accumulated amount

which together define a segmentation for the approximating function. On each segment the approximating function is then given by a Proxy::ProxyFunction which is, at this level of abstraction, just an arbitrary function from Real to Real, returning the npv for a given fx spot. The npv is expressed on a forward basis as of the last payment date of the structure, so that the proxy engine can discount back from this latest possible pricing date to the evaluation date. Remember that this evaluation date is in general (and typically) different from the one used for the mc pricing.
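Schematically, the proxy engine’s calculation then boils down to something like the following (a sketch, not the engine’s actual code: accAmountIndex is an assumed helper mapping the accumulated amount to its bucket, openFixings and accumulatedAmount are what the engine determines from the evaluation date and the instrument, and discount_ stands for the base engine’s discount curve):

// pick the proxy function for the current segment, evaluate it at the
// fx spot and discount from the structure's last payment date
Size row = openFixings - 1;
Size col = accAmountIndex(proxy_->accBucketLimits, accumulatedAmount);
Real forwardNpv = (*proxy_->functions[row][col])(exchangeRate_->value());
results_.value = forwardNpv * discount_->discount(proxy_->lastPaymentDate);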

The validate method checks if the function matrix is filled consistently with the segmentation information.

The proxy object is part of the results class for the instrument, which is again just using the standard formalism:

class FxTarf::results : public Instrument::results {
  public:
    void reset();
    boost::shared_ptr<FxTarf::Proxy> proxy;
};

The proxy engine expects a proxy object, which is checked in the constructor, using a dynamic downcast, to be one the engine can actually work with.

ProxyFxTarfEngine(
    boost::shared_ptr<ProxyInstrument::ProxyDescription> proxy,
    Handle<Quote> exchangeRate, Handle<YieldTermStructure> discount)
    : FxTarfEngine(discount), exchangeRate_(exchangeRate) {
    registerWith(exchangeRate_);
    proxy_ = boost::dynamic_pointer_cast<FxTarf::Proxy>(proxy);

    QL_REQUIRE(proxy_, "no FxTarf::Proxy given");

}

The third level of specialization is in the monte carlo engine, where the specific proxy object’s function type is implemented, derived from the definitions in the FxTarf class:

template <class RNG = PseudoRandom, class S = Statistics>
class McFxTarfEngine : public FxTarfEngine,
                       public McSimulation<SingleVariate, RNG, S> {
  public:
    /*! proxy function giving a function spot => npv for one segment
        (bucket accumulated amount, number of open fixings)
        the function is given by two quadratic polynomials on intervals
        (-\infty,cutoff] and (cutoff,\infty).
        Only the ascending (long calls) or descending (long puts) branch
        is used and then extrapolated flat.
    */
    class QuadraticProxyFunction : public FxTarf::Proxy::ProxyFunction {
      public:
        QuadraticProxyFunction(Option::Type, const Real cutoff, const Real a1,
                               const Real b1, const Real c1, const Real a2,
                               const Real b2, const Real c2);
        Real operator()(const Real spot) const;

      private:
        Option::Type type_;
        const Real a1_, b1_, c1_, a2_, b2_, c2_;
        const Real cutoff_;
        int flatExtrapolationType1_,
            flatExtrapolationType2_; // +1 = right, -1 = left
        Real extrapolationPoint1_, extrapolationPoint2_;
    };

At this place we finally fix the specific form of the approximating functions, which are in essence given by two quadratic polynomials. I will give more details on, and a motivation for, this in the next post.

Finally it appeared useful to derive the mc and proxy engines from a common base engine, which handles some trivial boundary cases (like all fixings being done, or the determination of the npv of a fixed amount that is not yet settled), so we have the hierarchy

class FxTarfEngine : public FxTarf::engine {}

template <class RNG = PseudoRandom, class S = Statistics>
class McFxTarfEngine : public FxTarfEngine,
                       public McSimulation<SingleVariate, RNG, S> {}

class ProxyFxTarfEngine : public FxTarfEngine {}

If you are interested in the code you can look into my repository. It may already work, but I did not test everything yet. More on this next week.


Fast Pricing of FX TARFs for XVA Calculation

Let’s play a game. We flip a coin ten times (here is a particularly nice way to do this – you can take the Greek 2 Euro coin for example, it has Εὐρώπη (both the goddess and the continent) on it). If it is heads I pay you one Euro. If it is tails you pay me two. Oh and if you should win more than three times while we are playing we just stop the whole thing, ok ?

An fx tarf is a sequence of fx forward trades in which our counterpart pays a strike rate. If a single forward is in favour of the counterpart, she or he executes it on the structure’s nominal (so she or he is long a call). If it is in our favour, we execute it on twice the nominal (so we are long a put on twice the nominal). And if the sum of the fixings in favour of the counterpart, \sum_i \max( S_i - K, 0 ) with S_i denoting the fx fixing, exceeds a given target, the remaining forwards expire without further payments. In such structures there are several conventions for the coupon fixing which triggers the target: either the full amount for this fixing is paid, or only the part of the coupon necessary to reach the target, or no coupon at all.
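Schematically, from the counterpart’s point of view the payoff at fixing i is

C_i = N \max(S_i - K, 0) - 2N \max(K - S_i, 0)

with nominal N, and the structure terminates after the first fixing for which the accumulated amount A_i = \sum_{j \le i} \max(S_j - K, 0) reaches the target; the conventions above differ in whether the positive part of that last coupon is paid in full, capped at the distance to the target, or not paid at all.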

The valuation of fx tarfs in general depends on the fx smiles for each component’s fixing. The whole smile is important here: both the strike of the trade and the target minus the accumulated amount are obviously critical points on the smile. Since the accumulated amount is itself a random quantity after the first fixing, the whole smile will affect the structure’s value. In addition, the intertemporal correlations of the fx spot on the fixing dates play an important role.

In this and probably one or more later posts I want to write about several things:

  • how a classic, fully fledged monte carlo pricing engine can be implemented for this structure
  • how an approximate npv for market scenarios and time decay assumptions can be calculated very quickly
  • how this can be implemented in a second pricing engine and how this is related to the first engine
  • how QuantLib’s transparent design can be retained when doing all this

Obviously fast pricing is useful to fill the famous npv cube which can then be used to calculate XVA numbers like CVA, DVA etc.

Today’s post is dedicated to some thoughts on the methodology for fast, approximate pricings. I am heavily inspired by a talk by some Murex colleagues here who implemented similar ideas in their platform for CVA and potential future exposure calculations. Moreover, the idea is related to the very classic and simple, but brilliant paper by Longstaff and Schwartz, Valuing American Options by Simulation: A Simple Least-Squares Approach, but has a slightly different flavour here.

Let’s fix a specific tarf termsheet. The structure has payment dates starting on 15-Nov-2014, then monthly until 15-Oct-2015. The fx fixing is taken to be the ECB fixing for EUR-USD two business days prior to each payment date. The nominal is 100 million Euro. Our counterpart holds the calls, we the puts and our puts are on 200 million Euro so leveraged by a factor of two. The strike is 1.10, so the counterpart’s calls were in the money at trade date.

The valuation date is 28-Apr-2015 and the remaining target is 0.10. The fx spot as of the valuation date is 1.10. The implied volatility for EUR-USD fx options is 20% (lognormal, not yet a problem in fx markets 😉 …), constant over time and flat in strike, and we assume equal Euro and USD interest rates, so there is no drift in our underlying Garman-Kohlhagen process. The payoff mode is full coupon. Of course the assumptions on the market data are only partly realistic.

The idea to approximate npvs efficiently is as follows. First we do a full monte carlo pricing in the usual way. Each path generates an npv. We store the following information on each grid point of each path

(# open fixings, fx spot, accumulated amount so far, npv of the remaining fixings)

The hope is then that we can do a regression analysis of the npv on these main price drivers, i.e. the fx spot, the already accumulated amount and the number of open fixings.
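In formulas, we look for an approximation of the form

\nu \approx f_{n,A}(S)

with one regression function f per bucket of the number of open fixings n and the accumulated amount A, fitted in the fx spot S to the collected path data; the experiments below are about what functional form f should have and how fine the bucketing needs to be.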

Note that this approach implies that the fx spot is taken from the “outside” XVA scenario set, but everything else (the interest rate curves and the volatility) is implied by the pricing model. This is slightly (or heavily) inconsistent with an XVA scenario set where the rate curves and maybe also the volatility structure are part of the scenarios.

Let’s fix the simplest case of only one open fixing left, i.e. we put ourselves at a point in time somewhere between the second-to-last and the last fixing. Also, we set the target to +\infty (i.e. we ignore that feature) for the time being and assume a leverage of one. Our structure collapses to a single, vanilla fx forward. We do 250k monte carlo paths and plot the results (the npv is in percent here):

[Figure tarf1: npv against fx spot and accumulated amount, one open fixing, no target, leverage one]

Do you see what is happening? We get a point cloud that – for a fixed accumulated amount and conditioned on the spot – averages to a line representing the fx forward npv. See below, where I do this in 2d and where it gets clearer. What we note here as a first observation is that the position of the cloud depends on the accumulated amount: lower spots are connected with lower accumulated amounts and higher spots with higher accumulated amounts. This is quite plausible, but has an impact on the areas where we have enough data to do a regression.

The next picture shows the same data but projecting along the accumulated amount dimension.

[Figure tarf2: same data projected along the accumulated amount dimension, with linear regression line and bucket averages]

Furthermore I added a linear regression line which should be able to predict the npv given a spot value. To test this I added three more horizontal lines that estimate the npv for spot values of 1.0, 1.1 and 1.2 by averaging over all generated monte carlo data within buckets [0.99,1.01], [1.09,1.11] and [1.19,1.21] respectively. The hope is that the horizontal lines intersect with the regression line at x-values of 1.0, 1.1 and 1.2. This looks quite good here.

Let’s look at a real TARF now, i.e. setting the target to 0.15.

[Figure tarf3: point cloud with the target set to 0.15, full coupon]

What is new here is that the cloud is cut at the target level, beyond which it simply collapses to a plane indicating a zero npv. Quite clear, because in this area the structure is terminated before the last fixing.

Otherwise this case is not too different from the one before, since we assume a full coupon payment and only have one fixing left, so we have an fx forward that might be killed by the target trigger beforehand. More challenging is the case where we pay a capped coupon. Excluding data where the target was triggered before, in this case we get

[Figure tarf4: point cloud for the capped coupon case]

We want to approximate npvs for spots 1.0, 1.1 and 1.2 and an accumulated amount of 0.05 (bucketed by 0.04 to 0.06) now. The target feature introduces curvature into our cloud. I take this into account by fitting a quadratic polynomial instead of only a linear function.

Furthermore we see that the npvs are limited to 15 now and decrease for higher spots. Why this latter thing? Actually, until now I only used the fixing times as simulation times (because only they are necessary for pricing and the process can take large steps due to its simplicity), so the spot is effectively always the previous fixing. And if this is above 1.1, it excludes the possibility of coupons higher than its difference to 1.1.

Let’s add more simulation times between the fixings (100 per year in total), as will likely be the case for the external XVA scenarios asking for npvs in the end:

[Figure tarf5: same regression, but with 100 simulation times per year]

The approximation works quite ok for spot 1.0 but not too well for 1.1 and 1.2 any more (in both cases above). Up to now we have not used the accumulated amount in our npv approximation. So let’s restrict ourselves to accumulated amounts of e.g. 0.02 to 0.08 (remember that we want a prediction conditioned on an accumulated amount of 0.05; I choose a bigger bucket for the regression though, to have “more” data and because I think I don’t want to compute too many regression functions on too little data in the end).

[Figure tarf6: regression restricted to accumulated amounts between 0.02 and 0.08]

Better. Let’s move to the original termsheet now (leverage 2, remaining target 0.1, full coupon) and to 5 open fixings instead of only 1, to see if all this breaks down in a more complex setting (my experience says, yes, that will happen). The accumulated amount we want to approximate for is now 0.01:

[Figure tarf7: original termsheet, 5 open fixings, quadratic regression]

Quite ok, phew. We see a new artefact now however: the quadratic regression function starts to fall again for spots bigger than 1.25. This is of course not sensible. So we not only need to compute different regression functions for different accumulated amounts (and different numbers of open fixings), but also for different spot regions. Let’s compute another quadratic regression for spots bigger than 1.2 for example (the blue graph):

[Figure tarf8: additional quadratic regression for spots bigger than 1.2 (blue graph)]

That would work for higher spots.

To summarize the experiments, the approach seems sensible in general, but we have to keep in mind a few things:

The number of time steps in the simulation should be larger than for pure pricing purposes, possibly the grid’s step size should be comparable to the XVA simulation.

The regression function can be assumed to be quadratic, but not globally. Instead the domain has to be partitioned by

  • the number of open fixings, possibly even
  • the number of open fixings and the distance to the last fixing,
  • the already accumulated amount
  • the fx spot

The next task would be to think of an algorithm that does a sensible partition automatically. One idea would be to require a certain minimum percentage of the data generated by the initial monte carlo pricing simulation to be available in each partition.
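A very naive sketch of that idea, merging adjacent accumulated-amount buckets until each holds a minimum share of the simulated data (the function and its criterion are mine, not a final algorithm):

// merge adjacent buckets (given by their n+1 limits for n buckets) until
// each merged bucket holds at least minShare of all data points;
// leftover points at the end are absorbed into the last merged bucket
// (uses <numeric> for std::accumulate)
std::vector<Real> mergeBuckets(const std::vector<Real> &limits,
                               const std::vector<Size> &pointsPerBucket,
                               const Real minShare) {
    Size total = std::accumulate(pointsPerBucket.begin(),
                                 pointsPerBucket.end(), Size(0));
    std::vector<Real> merged(1, limits.front());
    Size count = 0;
    for (Size i = 0; i < pointsPerBucket.size(); ++i) {
        count += pointsPerBucket[i];
        if (count >= minShare * total) {
            merged.push_back(limits[i + 1]);
            count = 0;
        }
    }
    if (merged.size() > 1)
        merged.back() = limits.back();
    else
        merged.push_back(limits.back());
    return merged;
}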

Next post will be on the implementation of the two pricing engines then !
