Pricing Engine Design for exotic XVA simulations (using the example of FX TARFs)

In my previous post I wrote about some ideas to efficiently approximate the value of an fx exotic (an fx tarf, in fact). One main motivation is to use such a fast pricing in an XVA simulation.


This post is dedicated to the design I came up with to fit the idea as accurately as possible into the existing QuantLib architecture.

The next post will then present details about the approximation scheme itself, some numerical examples comparing the approximation with a full pricing under several market and time decay scenarios, and performance tests.

Good design is of utmost importance. Even in a prosperous neighbourhood a broken window, not swiftly repaired, will soon lead to a second one, and after a while to a degenerate area. Luckily Luigi would never allow this to happen in QuantLib city, of course.

These are my thoughts on the design in general:

  • we should have two pricing engines, one MC and one Proxy engine. We should not have only one engine with some additional methods to extract the approximate NPVs in a proprietary way
  • the proxy engine should behave just like any other engine, e.g. the time decay should consistently be deduced from the global evaluation date, not through some special parameter, and the relevant market data should be given by standard structures
  • the implementation of new instruments and pricing engines following the same idea should be easy and be based on a common interface
  • the end user interface used in client code should be easy to use and foolproof
  • XVA simulation is an application of the proxy engine, but there is no strict connection to this use case – you can also use the proxy engine “just” to compute npvs faster if high accuracy is not required

Or in short: There should be nothing special about the whole thing. Curb your enthusiasm, just do a plain solid job, don’t try to impress anyone. Okay. Let’s start with the instrument class.

FxTarf(const Schedule schedule, 
       const boost::shared_ptr<FxIndex> &index,
       const Real sourceNominal,
       const boost::shared_ptr<StrikedTypePayoff> &shortPositionPayoff,
       const boost::shared_ptr<StrikedTypePayoff> &longPositionPayoff,
       const Real target, 
       const CouponType couponType = capped,
       const Real shortPositionGearing = 1.0,
       const Real longPositionGearing = 1.0,
       const Handle<Quote> accumulatedAmount = Handle<Quote>(),
       const Handle<Quote> lastAmount = Handle<Quote>());

The constructor takes

  • the structure’s schedule (meaning the value dates),
  • the underlying fx index representing for example an ECB fx fixing (this is a new class too, because there doesn’t seem to be an fx index in QuantLib yet, but I do not go into details about that here),
  • the nominal of the structure (in foreign, asset or source currency, there are many names for that)
  • the counterparties’ payoff profiles, which are normally short puts and long calls, all sharing the same strike
  • the target level where the structure knocks out
  • the coupon type (see the previous post I linked above)
  • the gearing of the two sides of the deal

The last two parameters accumulatedAmount and lastAmount are optional. If not given, the FxTarf computes the already accumulated amount by reading the index’s historic fixings.

If specified, on the other hand, historic fixings are ignored and the given accumulated amount is used. The lastAmount in this context is needed only in case the last fixing already occurred, but the associated payment is still in the future. The reason to introduce these somewhat redundant parameters is as follows. On one hand it may be handy not to set all historic fixings for the fx index, but to set the accumulated amount directly. Maybe you get the deal information from some source system that already provides the current accumulated amount along with the deal data. More importantly, during an XVA simulation you might not want to set all fixings in the IndexManager. You can do that if you like, but it may not be handy, because after each path you have to erase all fixings again; or maybe you do not even simulate each fixing date and just want to interpolate the accumulated amount. In any case it is just a convenience parameter. Use it or just ignore it.
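
To illustrate, a hypothetical construction could look as follows. The names schedule, ecbIndex and the payoff objects are assumed to be built elsewhere, the assignment of the put to the short side and the gearing of 2.0 follow the example termsheet from the previous post, and I assume the CouponType enum lives in FxTarf; treat this as a sketch, not as the definitive client code.

// sketch: set up the tarf with an externally maintained accumulated amount
boost::shared_ptr<SimpleQuote> accumulated =
    boost::make_shared<SimpleQuote>(0.05);

FxTarf tarf(schedule,                                  // value dates
            ecbIndex,                                  // underlying fx index
            100000000.0,                               // nominal (EUR)
            boost::make_shared<PlainVanillaPayoff>(Option::Put, 1.10),
            boost::make_shared<PlainVanillaPayoff>(Option::Call, 1.10),
            0.10,                                      // target level
            FxTarf::capped,                            // coupon type
            2.0,                                       // put side gearing
            1.0,                                       // call side gearing
            Handle<Quote>(accumulated));               // overrides fixings

// during an XVA simulation the amount can then simply be updated
// per path via the quote, without touching the IndexManager
accumulated->setValue(0.07);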

For a full pricing of the tarf we can use a monte carlo engine which is constructed in the usual way, for example

boost::shared_ptr<PricingEngine> mcEngine 
    = MakeMcFxTarfEngine<>(gkProcess)
      .withStepsPerYear(steps)
      .withSamples(samples)
      .withSeed(42)
      .withProxy();

The parameters here are the (generalized) Black-Scholes process, the number of time steps to simulate per year, the number of samples, and the seed for the RNG.

The last modifier .withProxy() is the only special thing here: By default the engine will just compute an npv (and error estimate) like any other mc engine does. If the engine is set up with the proxy flag, on the other hand, additional information is collected during the simulation and analyzed to produce a proxy information object that can be used later for approximate pricings. We will see in a minute how.

We need this modifier because the simulation gets slower when creating the proxy, so we should be able to switch it off.

Now we set the engine and can compute the (full) npv:

tarf.setPricingEngine(mcEngine);
std::cout << "MCEngine NPV = " << tarf.NPV() << " error estimate = "
          << tarf.errorEstimate() << std::endl;

If we just want the proxy pricing, we can of course skip reading the full npv, but it doesn’t hurt, since the npv is produced anyway, even when we are only after the information for the proxy engine.

To use the proxy engine we have to start with an engine which can produce this piece of information. The proxy engine is then fed with the proxy object:

boost::shared_ptr<PricingEngine> proxyEngine =
    boost::make_shared<ProxyFxTarfEngine>(tarf.proxy(), 
                                          fxQuote, 
                                          gkProcess->riskFreeRate());
tarf.setPricingEngine(proxyEngine);
std::cout << "ProxyEngine NPV = " << tarf.NPV() << std::endl;

The proxy engine is constructed with

  • the proxy description produced by the mc engine, which can be retrieved from the instrument by .proxy() (this is a special result which seems important enough not to bury it in the additional results heap; this is nothing innovative though, it works like the bps or fair rate results for swaps)
  • the fx quote used for the pricing – i.e. the essential market data needed for the proxy pricing
  • the discount curve for the pricing (which is taken as the domestic rate curve from our black scholes process in our example client code)

In addition, the global evaluation date determines the reference date for the valuation, just as usual.
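
To make this concrete, a repricing under a shifted scenario could look like this in client code. I assume here that fxSpot is the SimpleQuote behind the fxQuote handle passed to the engine above; the date and the spot level are arbitrary.

// sketch: approximate npv under a market / time decay scenario
Settings::instance().evaluationDate() = Date(28, May, 2015); // time decay
fxSpot->setValue(1.15);                                      // spot scenario
// the proxy engine picks up both changes automatically
std::cout << "scenario NPV = " << tarf.NPV() << std::endl;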

If the instrument does not provide a proxy object (e.g. because the mc engine was not asked to produce one, see above), or if the proxy object is not suitable for the proxy engine to be constructed, an exception is thrown.

What is going on behind the scenes: I added an interface for instruments that are capable of proxy pricing:

class ProxyInstrument {
  public:
    //! Base class for proxy descriptions used by approximate pricing engines
    struct ProxyDescription {
        // check if proxy description is valid
        virtual void validate() const = 0;
    };

    virtual boost::shared_ptr<ProxyDescription> proxy() const = 0;
};

The only method to implement is proxy(), which should return an object containing the information necessary for a compatible proxy engine to compute approximate npvs (see below for what compatible means). The information itself should be an object derived from ProxyInstrument::ProxyDescription. It has to provide a validate method that checks the provided data for consistency.

This is how the fx tarf instrument implements this interface:

class FxTarf : public Instrument, public ProxyInstrument {
  public:
    //! proxy description
    struct Proxy : ProxyDescription {
        struct ProxyFunction {
            virtual Real operator()(const Real spot) const = 0;
        };
        // maximum number of open fixings
        Size maxNumberOpenFixings;
        // last payment date, the npvs are forward npvs w.r.t. this date
        Date lastPaymentDate;
        // buckets for accumulated amount, e.g.
        // 0.0, 0.1, 0.2, 0.3, 0.4 means
        // [0.0,0.1) has index 0
        // [0.1,0.2) has index 1
        // ...
        // [0.4,target] has index 4
        std::vector<Real> accBucketLimits;
        // proxy functions
        // first index is openFixings-1
        // second index is accAmountIndex
        // A function F should implement
        // operator()(Real spot) = npv
        std::vector<std::vector<boost::shared_ptr<ProxyFunction> > > functions;
        void validate() const {
            QL_REQUIRE(functions.size() == maxNumberOpenFixings,
                       "maximum number of open fixings ("
                           << maxNumberOpenFixings
                           << ") must be equal to function rows ("
                           << functions.size() << ")");
            for (Size i = 0; i < functions.size(); ++i) {
                QL_REQUIRE(functions[i].size() == accBucketLimits.size(),
                           "number of acc amount buckets ("
                               << accBucketLimits.size()
                               << ") must be equal to function columns ("
                               << functions[i].size() << ") in row " << i);
            }
        }
    };
/* ... */

This says that the specific (or one possible) proxy information of an fx tarf consists of some descriptive data, which is

  • the maximum number of open (future) fixings
  • the last payment date of the structure – see below
  • a bucketing of the accumulated amount

which together define a segmentation for the approximating function. On each segment the approximating function is then given by a Proxy::ProxyFunction, which is at this level of abstraction just an arbitrary function Real to Real, returning the npv for a given fx spot. The npv is expressed on a forward basis as of the last payment date of the structure, so that the proxy engine can discount back from this latest possible pricing date to the evaluation date. Remember that this evaluation date is in general (and typically) different from the one used for the mc pricing.
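
To illustrate how a proxy engine might consume this description, here is a hypothetical sketch of the npv lookup. The free function is mine, not part of the interface, and it assumes at least one open fixing and an accumulated amount within the bucket range.

// sketch: npv lookup in the proxy description (hypothetical helper)
Real proxyNpv(const FxTarf::Proxy &p, const Size openFixings,
              const Real accAmount, const Real spot,
              const Handle<YieldTermStructure> &discount) {
    // locate the accumulated amount bucket the scenario falls into
    Size accIndex = std::upper_bound(p.accBucketLimits.begin(),
                                     p.accBucketLimits.end(), accAmount) -
                    p.accBucketLimits.begin() - 1;
    // forward npv as of the last payment date ...
    Real forwardNpv = (*p.functions[openFixings - 1][accIndex])(spot);
    // ... discounted back from there to the evaluation date
    return forwardNpv * discount->discount(p.lastPaymentDate);
}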

The validate method checks if the function matrix is filled consistently with the segmentation information.

The proxy object is part of the results class for the instrument, which is again just using the standard formalism:

class FxTarf::results : public Instrument::results {
  public:
    void reset();
    boost::shared_ptr<FxTarf::Proxy> proxy;
};

The proxy engine expects a proxy object, which is checked in the constructor, using a dynamic downcast, to be one the engine can actually work with.

ProxyFxTarfEngine(
    boost::shared_ptr<ProxyInstrument::ProxyDescription> proxy,
    Handle<Quote> exchangeRate, Handle<YieldTermStructure> discount)
    : FxTarfEngine(discount), exchangeRate_(exchangeRate) {
    registerWith(exchangeRate_);
    proxy_ = boost::dynamic_pointer_cast<FxTarf::Proxy>(proxy);

    QL_REQUIRE(proxy_, "no FxTarf::Proxy given");
}

The third level of specialization is in the monte carlo engine, where the specific proxy object’s function type is implemented, derived from the definitions in the FxTarf class:

template <class RNG = PseudoRandom, class S = Statistics>
class McFxTarfEngine : public FxTarfEngine,
                       public McSimulation<SingleVariate, RNG, S> {
  public:
    /*! proxy function giving a function spot => npv for one segment
        (bucket accumulated amount, number of open fixings)
        the function is given by two quadratic polynomials on intervals
        (-\infty,cutoff] and (cutoff,\infty).
        Only the ascending (long calls) or descending (long puts) branch
        is used and then extrapolated flat.
    */
    class QuadraticProxyFunction : public FxTarf::Proxy::ProxyFunction {
      public:
        QuadraticProxyFunction(Option::Type, const Real cutoff, const Real a1,
                               const Real b1, const Real c1, const Real a2,
                               const Real b2, const Real c2);
        Real operator()(const Real spot) const;

      private:
        Option::Type type_;
        const Real a1_, b1_, c1_, a2_, b2_, c2_;
        const Real cutoff_;
        int flatExtrapolationType1_,
            flatExtrapolationType2_; // +1 = right, -1 = left
        Real extrapolationPoint1_, extrapolationPoint2_;
    };

At this point we finally fix the specific form of the approximating functions, which are in essence given by two quadratic polynomials. I will give more details on, and a motivation for, this choice in the next post.
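
The evaluation itself is then straightforward. Here is a minimal sketch, assuming the constructor has precomputed extrapolationPoint1_ and extrapolationPoint2_ as the vertices -b/(2a) of the two parabolas; the actual implementation may differ in detail.

// sketch: choose the polynomial by the cutoff and use only its
// monotonic branch, extrapolating flat beyond the vertex
template <class RNG, class S>
Real McFxTarfEngine<RNG, S>::QuadraticProxyFunction::
operator()(const Real spot) const {
    Real x = spot;
    if (spot <= cutoff_) {
        x = flatExtrapolationType1_ == 1
                ? std::min(x, extrapolationPoint1_)  // flat to the right
                : std::max(x, extrapolationPoint1_); // flat to the left
        return a1_ * x * x + b1_ * x + c1_;
    } else {
        x = flatExtrapolationType2_ == 1
                ? std::min(x, extrapolationPoint2_)
                : std::max(x, extrapolationPoint2_);
        return a2_ * x * x + b2_ * x + c2_;
    }
}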

Finally it appeared useful to derive the mc and proxy engines from a common base engine, which handles some trivial boundary cases (such as all fixings being done already, or the determination of the npv of a fixed amount that is not yet settled), so we have the hierarchy

class FxTarfEngine : public FxTarf::engine {}

template <class RNG = PseudoRandom, class S = Statistics>
class McFxTarfEngine : public FxTarfEngine,
                       public McSimulation<SingleVariate, RNG, S> {}

class ProxyFxTarfEngine : public FxTarfEngine {}
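
To give an idea of what the base engine might contribute, here is a sketch of its calculate method. The arguments_ member names are assumptions for illustration, not necessarily the actual implementation.

// sketch: the base engine prices the parts that need no model at all
void FxTarfEngine::calculate() const {
    Date today = Settings::instance().evaluationDate();
    results_.value = 0.0;
    // a fixing that has already occurred but settles in the future is
    // a known amount and only needs discounting with the given curve
    if (arguments_.lastFixingDate <= today &&
        today < arguments_.lastPaymentDate)
        results_.value += arguments_.lastAmount *
                          discount_->discount(arguments_.lastPaymentDate);
    // the value of the open fixings (if any) is contributed by the
    // derived mc or proxy engine on top of this
}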

If you are interested in the code you can look into my repository. It may already work, but I did not test everything yet. More on this next week.


Fast Pricing of FX TARFs for XVA Calculation

Let’s play a game. We flip a coin ten times (here is a particularly nice way to do this – you can take the Greek 2 Euro coin for example, it has Εὐρώπη (both the goddess and the continent) on it). If it is heads I pay you one Euro. If it is tails you pay me two. Oh, and if you should win more than three times while we are playing, we just stop the whole thing, ok?

An fx tarf is a sequence of fx forward trades in which our counterpart pays a strike rate. If a single forward is in favour of the counterpart, she or he executes it on the structure’s nominal (so she or he is long a call). If it is in our favour, we execute it on twice the nominal (so we are long a put on twice the nominal). And if the sum of the amounts \max(S_i - K, 0) accumulated in favour of the counterpart, with S_i denoting the fx fixing and K the strike, exceeds a given target, the remaining forwards expire without further payments. For the coupon fixing which triggers the target there are several conventions: Either the full amount for this fixing is paid, or only the part of the coupon necessary to reach the target, or no coupon at all.
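
A small self-contained sketch may make these three conventions precise; the enum and the function are made up for illustration (in the instrument of the follow-up post the coupon type is a constructor argument).

#include <algorithm> // std::max

// hypothetical enum matching the three conventions described above
enum CouponType { full, capped, none };

// coupon paid on the fixing that triggers the target
// (accumulated: amount collected before this fixing)
double triggerCoupon(double fixing, double strike, double accumulated,
                     double target, CouponType type) {
    double c = std::max(fixing - strike, 0.0);
    if (accumulated + c < target)
        return c;               // target not reached, regular coupon
    switch (type) {
      case full:   return c;                    // pay the full coupon
      case capped: return target - accumulated; // pay only up to the target
      case none:   return 0.0;                  // pay nothing
    }
    return 0.0; // not reached
}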

The valuation of fx tarfs in general depends on the fx smiles for each component’s fixing. The whole smile matters here: Both the strike of the trade and the target minus the accumulated amount are obviously critical points on the smile. Since the accumulated amount is itself a random quantity after the first fixing, the whole smile will affect the structure’s value. In addition, the intertemporal correlations of the fx spot on the fixing dates play an important role.

In this and probably one or more later posts I want to write about several things:

  • how a classic, fully fledged monte carlo pricing engine can be implemented for this structure
  • how an approximate npv for market scenarios and time decay assumptions can be calculated very quickly
  • how this can be implemented in a second pricing engine and how this is related to the first engine
  • how QuantLib’s transparent design can be retained when doing all this

Obviously fast pricing is useful to fill the famous npv cube which can then be used to calculate XVA numbers like CVA, DVA etc.

Today’s post is dedicated to some thoughts on the methodology for fast, approximate pricings. I am heavily inspired by a talk of some Murex colleagues here who implemented similar ideas in their platform for CVA and potential future exposure calculations. Moreover the idea is related to the classic, simple, but brilliant paper by Longstaff and Schwartz, Valuing American Options by Simulation: A Simple Least-Squares Approach, but has a slightly different flavour here.

Let’s fix a specific tarf termsheet. The structure has payment dates starting on 15-Nov-2014, then monthly until 15-Oct-2015. The fx fixing is taken to be the ECB fixing for EUR-USD two business days prior to each payment date. The nominal is 100 million Euro. Our counterpart holds the calls, we hold the puts, and our puts are on 200 million Euro, so leveraged by a factor of two. The strike is 1.10, so the counterpart’s calls were in the money at trade date.

The valuation date is 28-Apr-2015 and the remaining target is 0.10. The fx spot as of the valuation date is 1.10. The implied volatility for EUR-USD fx options is 20% (lognormal, not yet a problem in fx markets 😉 …), flat in strike and constant over time, and we assume equal Euro and USD interest rates, so there is no drift in our underlying Garman-Kohlhagen process. The payoff mode is full coupon. Of course the assumptions on market data are only partly realistic.
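
In QuantLib terms this market data can be set up with flat structures. A sketch – the calendar, day counter and the 2% rate level are arbitrary choices of mine, only the equality of the two rates matters; a process like this is what the mc engine above is constructed with.

// sketch: flat market data for the example termsheet
Handle<Quote> fxSpot(boost::make_shared<SimpleQuote>(1.10));
Handle<YieldTermStructure> eurRate(
    boost::make_shared<FlatForward>(0, TARGET(), 0.02, Actual365Fixed()));
Handle<YieldTermStructure> usdRate(
    boost::make_shared<FlatForward>(0, TARGET(), 0.02, Actual365Fixed()));
Handle<BlackVolTermStructure> vol(boost::make_shared<BlackConstantVol>(
    0, TARGET(), 0.20, Actual365Fixed()));
// foreign (EUR) curve first, then domestic (USD)
boost::shared_ptr<GarmanKohlagenProcess> gkProcess =
    boost::make_shared<GarmanKohlagenProcess>(fxSpot, eurRate, usdRate, vol);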

The idea to approximate npvs efficiently is as follows. First we do a full monte carlo pricing in the usual way. Each path generates an npv. We store the following information at each grid point of each path:

(# open fixings, fx spot, accumulated amount so far, npv of the remaining fixings)

The hope is then that we can do a regression analysis of the npv on these main price drivers, i.e. the fx spot, the already accumulated amount and the number of open fixings.
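
In code this amounts to collecting one small record per grid point and path during the simulation; a sketch, with a struct that is hypothetical:

// sketch: regression data collected during the initial mc simulation
struct ProxyDataPoint {
    Size openFixings; // number of future fixings at this grid point
    Real spot;        // simulated fx spot
    Real accumulated; // amount accumulated so far on this path
    Real npv;         // pathwise npv of the remaining fixings
};
std::vector<ProxyDataPoint> data; // one entry per grid point and path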

Note that this approach implies that the fx spot is taken from the “outside” XVA scenario set, but everything else (the interest rate curves and the volatility) is implied by the pricing model. This is slightly (or heavily) inconsistent with an XVA scenario set where rate curves and maybe also the volatility structure are part of the scenarios.

Let’s fix the simplest case of only one open fixing left, i.e. we put ourselves at a point in time somewhere between the second to last and the last fixing. Also, we set the target to +\infty (i.e. we ignore that feature) for the time being and assume a leverage of one. Our structure collapses to a single vanilla fx forward. We do 250k monte carlo paths and plot the results (the npv is in percent here):

[Figure tarf1: simulated npv against fx spot and accumulated amount, one open fixing, no target]

Do you see what is happening? We get a point cloud that – for a fixed accumulated amount – conditioned on the spot averages to a line representing the fx forward npv. See below, where I do this in 2d and where it gets clearer. What we note here as a first observation is that the position of the cloud depends on the accumulated amount: Lower spots are associated with lower accumulated amounts and higher spots with higher accumulated amounts. This is quite plausible, but has an impact on the areas where we have enough data to do a regression.

The next picture shows the same data but projecting along the accumulated amount dimension.

[Figure tarf2: the same data projected along the accumulated amount dimension, with linear regression line and bucket averages]

Furthermore I added a linear regression line, which should be able to predict the npv given a spot value. To test this I added three more horizontal lines that estimate the npv for spot values of 1.0, 1.1 and 1.2 by averaging over all generated monte carlo data within the buckets [0.99,1.01], [1.09,1.11] and [1.19,1.21] respectively. The hope is that the horizontal lines intersect the regression line at x-values of 1.0, 1.1 and 1.2. This looks quite good here.

Let’s look at a real TARF now, i.e. setting the target to 0.15.

[Figure tarf3: point cloud with target 0.15, full coupon]

What is new here is that the cloud is cut off at the target level, beyond which it simply collapses to a plane indicating a zero npv. Quite clear, because in this area the structure is terminated before the last fixing.

Otherwise this case is not too different from the one before, since we assume a full coupon payment and only have one fixing left, so we have an fx forward that might be killed by the target trigger beforehand. More challenging is the case where we pay a capped coupon. Excluding data where the target was triggered before, in this case we get

[Figure tarf4: capped coupon case, data with previously triggered target excluded]

We now want to approximate npvs for spots 1.0, 1.1 and 1.2 and an accumulated amount of 0.05 (bucketed by 0.04 to 0.06). The target feature introduces curvature into our cloud. I take this into account by fitting a quadratic polynomial instead of only a linear function.
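
The fit itself is an ordinary least squares problem with basis functions 1, x, x^2. A self-contained sketch via the normal equations – illustrative only, without numerical safeguards; QuantLib’s LinearLeastSquaresRegression would do the same job:

// sketch: fit y ~ c0 + c1*x + c2*x^2 by solving the 3x3 normal equations
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> fitQuadratic(const std::vector<double> &x,
                                 const std::vector<double> &y) {
    double A[3][3] = {{0.0}};
    double b[3] = {0.0};
    for (std::size_t k = 0; k < x.size(); ++k) {
        double p[3] = {1.0, x[k], x[k] * x[k]};
        for (int i = 0; i < 3; ++i) {
            b[i] += p[i] * y[k];
            for (int j = 0; j < 3; ++j)
                A[i][j] += p[i] * p[j];
        }
    }
    // gaussian elimination with partial pivoting
    for (int i = 0; i < 3; ++i) {
        int piv = i;
        for (int r = i + 1; r < 3; ++r)
            if (std::fabs(A[r][i]) > std::fabs(A[piv][i])) piv = r;
        std::swap(b[i], b[piv]);
        for (int j = 0; j < 3; ++j) std::swap(A[i][j], A[piv][j]);
        for (int r = i + 1; r < 3; ++r) {
            double f = A[r][i] / A[i][i];
            b[r] -= f * b[i];
            for (int j = i; j < 3; ++j) A[r][j] -= f * A[i][j];
        }
    }
    // back substitution
    std::vector<double> c(3);
    for (int i = 2; i >= 0; --i) {
        c[i] = b[i];
        for (int j = i + 1; j < 3; ++j) c[i] -= A[i][j] * c[j];
        c[i] /= A[i][i];
    }
    return c; // coefficients of c[0] + c[1]*x + c[2]*x^2
}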

Furthermore we see that the npvs are now limited to 15 and decreasing for higher spots. Why the latter? Until now I only used the fixing times as simulation times (because only they are necessary for pricing and the process can take large steps due to its simplicity), so the spot is effectively always the previous fixing. And if this is above 1.1, it excludes the possibility of coupons higher than its difference to 1.1.

Let’s add more simulation times between the fixings (100 per year in total), as will likely be the case with the external XVA scenarios asking for npvs in the end:

[Figure tarf5: the same setup with 100 simulation times per year]

The approximation works quite ok for spot 1.0, but not too well for 1.1 and 1.2 any more (in both cases above). Up to now we have not used the accumulated amount in our npv approximation. So let’s restrict ourselves to accumulated amounts of e.g. 0.02 to 0.08 (remember that we want a prediction conditioned on an accumulated amount of 0.05; I choose a bigger bucket for the regression though, to have “more” data, and because I don’t want to compute too many regression functions on too small data sets in the end).

[Figure tarf6: regression restricted to accumulated amounts between 0.02 and 0.08]

Better. Let’s move to the original termsheet now (leverage 2, remaining target 0.1, full coupon) and to 5 open fixings instead of only 1, to see if all this breaks down in a more complex setting (my experience says: yes, that will happen). The accumulated amount we want to approximate for is now 0.01:

[Figure tarf7: original termsheet, 5 open fixings, accumulated amount 0.01]

Quite ok, phew. However, we see a new artefact now: The quadratic regression function starts to fall again for spots bigger than 1.25. This is of course not sensible. So we not only need to compute different regression functions for different accumulated amounts (and different numbers of open fixings), but also for different spot regions. Let’s compute another quadratic regression for spots bigger than 1.2, for example (the blue graph):

[Figure tarf8: additional quadratic regression for spots above 1.2 (blue graph)]

That would work for higher spots.

To summarize the experiments, the approach seems sensible in general, but we have to keep in mind a few things:

The number of time steps in the simulation should be larger than for pure pricing purposes; possibly the grid’s step size should be comparable to that of the XVA simulation.

The regression function can be assumed to be quadratic, but not globally. Instead the domain has to be partitioned by

  • the number of open fixings, possibly even
  • the number of open fixings and the distance to the last fixing,
  • the already accumulated amount
  • the fx spot

The next task would be to think of an algorithm that does a sensible partitioning automatically. One idea would be to require a certain minimum percentage of the data generated by the initial monte carlo pricing to be available in each partition, as in the sketch below.
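
A purely illustrative sketch of this idea: bisect a spot interval only as long as both halves keep at least a minimum share of the simulated points.

// sketch: recursively bisect [a,b), keeping a minimum share of the
// mc data in every part; collects the chosen segment boundaries
#include <cstddef>
#include <vector>

void partition(const std::vector<double> &spots, double a, double b,
               double minShare, std::vector<double> &boundaries) {
    double m = 0.5 * (a + b);
    std::size_t left = 0, right = 0;
    for (std::size_t i = 0; i < spots.size(); ++i) {
        if (spots[i] >= a && spots[i] < m) ++left;
        if (spots[i] >= m && spots[i] < b) ++right;
    }
    double n = static_cast<double>(spots.size());
    if (left / n < minShare || right / n < minShare)
        return; // too little data, keep [a,b) as one segment
    boundaries.push_back(m);
    partition(spots, a, m, minShare, boundaries);
    partition(spots, m, b, minShare, boundaries);
}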

The next post will then be about the implementation of the two pricing engines!
