The Chambers-Nawalkha Formula

This post is about implied volatility, which can for example be found as \sigma in the Black76 dynamics

dF = \sigma F dW

with an underlying forward rate F and a Brownian motion W. It is this \sigma which is often used to express a vanilla option price, because it normalizes out the dependency on expiry and strike in a certain way.

It is the same \sigma that makes trouble for caps and swaptions in EUR nowadays, because it rules out negative forwards and tends to explode for low interest rate levels. But there are workarounds for this, like normal volatilities (remove the F on the right hand side of the equation above) or shifted lognormal volatilities (replace F by F+\alpha for a constant \alpha \geq 0). I will write more about this in a future post.

Today I focus on the implied aspect. This means you start with a price and ask for the \sigma that reproduces this price in the Black76 model.

Why is that important? Because implied volatilities are something that traders want to see. Or you want to use them for interpolation rather than the prices directly. Or your trading system accepts only them as market data input. Also there are useful models (or model resolutions) that work directly with volatilities, like the SABR model in the Hagan 2002 resolution or the SVI model. On the other hand, sometimes only prices are available in the first place. These could be market quotes. They could also come from models that produce premiums and not directly volatilities, like other resolutions of the SABR model.

I have run into this issue several times already. An example is the KahaleSmileSection class, which can be used to check a smile for arbitrage and replace the defective regions by an arbitrage-free extrapolation. This class works in call price space. You can retrieve implied volatilities from it, but for this I needed to use

try {
    Option::Type type = strike >= f_ ? Option::Call : Option::Put;
    vol = blackFormulaImpliedStdDev(
              type, strike, f_,
              type == Option::Put ? strike - f_ + c : c) /
          std::sqrt(exerciseTime());
} catch (...) { /* ... */ }

which has c, the call price, as an input and converts it to an implied volatility using the library’s function blackFormulaImpliedStdDev. This function uses a numerical zero search together with the usual “forward” Black formula to find the implied volatility, with the usual downsides. It would be nicer to have a closed-form solution for the implied volatility!

Another example is the no-arbitrage SABR model by Paul Doust, which also produces option prices and not directly volatilities. Or of course the ZABR model, which we already had on this blog here and which has premium outputs at least for some of its resolutions.

On the Wilmott forums someone said

there actually is a closed-form inverted Black-Scholes formula due to Bourianochevsky and Jarulamparabalam … but I’ve never seen it since it’s proprietary …

Hmm, googling the two names gives exactly one result (the Wilmott thread), although they sound familiar. But he said proprietary, didn’t he. And later, the same guy:

oh, and they also claim they have a closed-form formula for the American put option … again proprietary.

Also very cool. Seriously, on the same thread there is a reference to this nice paper: Can There Be an Explicit Formula for Implied Volatility?. A hint that things are difficult, to say the least.

What really counts at the end of the day is what is available in QuantLib. There is blackFormulaImpliedStdDevApproximation, which is also among the longest function names in the library. It uses (citing from the source code)

Brenner and Subrahmanyan (1988) and Feinstein (1988) approximation for at-the-money forward option, with the extended moneyness approximation by Corrado and Miller (1996)

Quite some contributors. However, it does not work too well, at least not in all cases. Look at a SABR smile with \alpha=0.04, \beta=0.5, \nu=0.15, \rho=-0.5, a forward of 0.03 and a time to expiry of 30 years.


It is surprisingly easy to improve on this, following a paper by Chambers and Nawalkha, “An Improved Approach to Computing Implied Volatility” (The Financial Review 38, 2001, 89-100). The cost for the improvement is that one needs one more input, namely the atm price. But this is not too bad in general. If you have prices from a model that you want to convert into implied volatilities, you can produce the atm price, no problem then. And if you have market quotes, you will often have the atm quote available, because this is usually the most liquid one.

What they do is

  • compute an approximate atm volatility from the given atm price
  • use this to reprice the option in question on the atm volatility level
  • recover the difference between the requested volatility and the atm volatility from the price difference via a second order Taylor expansion (in terms of vega and its derivative)

For the first step they do the same as blackFormulaImpliedStdDevApproximation, which is to apply the so-called Brenner and Subrahmanyan formula to get the implied atm volatility from the given atm option price.

This formula is very, very easy: freeze the forward at time zero in the Black dynamics to get a normally distributed forward at option expiry. Then integrate the option payoff, which can be done explicitly in this case using school math. This gives

E\left( (F(t)-K)^+ \right) \approx F(0)\,\sigma\,\sqrt{t/(2\pi)}

so that the atm volatility can be easily computed from the atm option price. I hear some people find it “deep” that \pi appears in this equation relating option prices and the implied volatility. A sign of a transcendental link between option markets and math. I don’t find that deep, but who am I to judge.

The second step is easy, just an application of the forward Black formula.

For the third step we have (a star denoting quantities at the atm volatility, subscripts denoting derivatives with respect to \sigma)

c(K) - c^*(K) = c_\sigma(K) (\sigma - \sigma^*) + \frac{1}{2} c_{\sigma\sigma}(K) (\sigma - \sigma^*)^2 + ...

This is a quadratic equation which can readily be solved for (\sigma - \sigma^*).

You can find working code in my master branch. The name of the function is blackFormulaImpliedStdDevApproximationChambers and it is part of the file blackformula.cpp.

Let’s try it on the same data as above.


Better! Also the resulting smile shape stays natural, even in regions where it deviates a bit more from the original smile.

Have a very nice weekend all and a good new week.


Interlude: SFINAE tricks

Hello again. This week I want to write about a point which remained open in a previous post. Jan Ladislav Dussek already gave a solution in response to that post. Thanks for that! However I believe the technology behind it is quite fancy, so it is worth a post in its own right.

What else is new? There is a second project adding AD capabilities to QuantLib, following a different approach than the one described earlier on this blog. The idea is to replace the Real data type by active AD types at compile time and use them throughout all calculations. This requires only minimal code changes compared to the template approach.

The downside is that you cannot mix AD and non-AD calculations at runtime, which might have an impact on the performance and/or memory footprint of your application.

My personal experiments show a 20 – 30 % slowdown in some simple examples, but maybe this can be optimized. Or maybe this isn’t too bad and a small price to pay in comparison to the work needed to convert large parts of the codebase. Another question is whether the templated code can or should be merged into the official branch sooner or later. If not, it would be quite cumbersome to keep an adjoint branch up to date with current developments. But not impossible.

Anyway it is interesting to note that the work on the new approach was initiated by Alexander Sokol and his company CompatibL. So quite a competitor. Like running a marathon with Dennis Kimetto (he set a new world record of 2:02:57 last year in Berlin, a time in which I can safely run half the distance, well a bit more, but not much). Why not. We are still on track, aren’t we Cheng (whom I thank for his tireless contributions). 🙂

Back to the main point of this post. I first restate the problem in a slightly more abstract way. We have the following classes on stage

template <class T> class BaseCouponPricer {};
template <class T> class DerivedCouponPricer : 
                      public BaseCouponPricer<T> {};

where the base class in QuantLib is really FloatingRateCouponPricer and the derived class something like CmsCouponPricer or IborCouponPricer or the like.

Now we have a utility function assigning coupon pricers to coupons

template <class T>
void setCouponPricer(const boost::shared_ptr<BaseCouponPricer<T> > &) {
    /* ... */
}

taking a shared pointer to some pricer derived from our base class. Without the template stuff this is not a problem, because there is an implicit conversion defined from a shared pointer to D to a shared pointer to B if D inherits from B. Although – and this is important to note – this does not imply an inheritance relationship between the shared pointers themselves. And this is the whole source of the problem, because user code like this

boost::shared_ptr<BaseCouponPricer<double> > basePricer;
boost::shared_ptr<DerivedCouponPricer<double> > derivedPricer;

setCouponPricer(basePricer);    // fine, exact match
setCouponPricer(derivedPricer); // compile error

will result in a compiler error like this

testTempl.cpp:35:34: error: no matching function for call to ‘setCouponPricer(boost::shared_ptr<DerivedCouponPricer<double> >&)’
testTempl.cpp:35:34: note: candidate is:
testTempl.cpp:12:6: note: template<class T> void setCouponPricer(const boost::shared_ptr<BaseCouponPricer<T> >&)
 void setCouponPricer(const boost::shared_ptr<BaseCouponPricer<T> > &) {
testTempl.cpp:12:6: note:   template argument deduction/substitution failed:
testTempl.cpp:35:34: note:   mismatched types ‘BaseCouponPricer<T>’ and ‘DerivedCouponPricer<double>’
testTempl.cpp:35:34: note:   ‘boost::shared_ptr<DerivedCouponPricer<double> >’ is not derived from ‘const boost::shared_ptr<BaseCouponPricer<T> >’

where the essential message is in the last line. This was produced with gcc; clang is a bit less chatty (which is a good quality in general):

testTempl.cpp:35:5: error: no matching function for call to 'setCouponPricer'
testTempl.cpp:12:6: note: candidate template ignored: could not match 'BaseCouponPricer' against

The implicit conversion mentioned above does not help here because it is not taken into consideration by the compiler’s template substitution machinery.

The idea to solve this is to make the implementation more tolerant, like this

template <template<class> class S, class T>
void setCouponPricer(const boost::shared_ptr<S<T> > &) {
    /* ... */
}

now taking a shared pointer to any type S which itself takes a template parameter T. This compiles without errors. But now the door is open for code that does not make sense, because S could be something completely different from a pricer.

What we need is a compile time check for class inheritance. The boost type traits library (or, since C++11, the standard library itself) has something for this; namely, we can write

bool isBase = boost::is_base_of<BaseCouponPricer<double>, 
                                DerivedCouponPricer<double> >::type::value;

Here we use generic template programming, which is something like programming with types instead of values, where the evaluation is done during compilation, not at run time. The metafunction we use here is boost::is_base_of, which takes two arguments; its return value is itself a type, which we retrieve via ::type.

Since there is no such thing as a function taking type arguments and returning a type in C++ (and which in addition is evaluated at compile time), ordinary structs are used for this purpose, taking the input values as template parameters and providing the return value as a typedef within the struct, assigning the label type to the actual return value which is, remember, a type.

In our specific case we are expecting a boolean as the return value, so the return type has to represent true or false, and it does. Actually there is a wormhole from the meta programming space to the regular C++ space, yielding a usual bool, and this is via ::value, which unpacks a bool from its meta space wrapper, where it is stored just as a static constant.

If on the other hand we write

bool isBase = boost::is_base_of<BaseCouponPricer<double>, 
                                UnrelatedClass<double> >::type::value;

for an unrelated class (i.e. a class not derived from BaseCouponPricer) we get isBase = false.

But we are not done, the best part is yet to come. I just begin with the end result, because I don’t have a clue how people come up with these kinds of solutions and am reluctant to make up a motivating sentence

template <template <class> class S, class T>
void setCouponPricer(
    const boost::shared_ptr<S<T> > &,
    typename boost::enable_if<
        boost::is_base_of<BaseCouponPricer<T>, S<T> > >::type *dummy = 0) {
    /* ... */
}

What I usually do when I see code like this for the first time is sit there for a few hours and just stare at it. And then after a while you see the light:

We add a second parameter to our function which is defaulted (to 0, doesn’t matter) so that we do not need to specify it. The only reason for this second parameter is to make sense or not to make sense, in the latter case removing the function from the set of candidates the compiler considers as a template instantiation for the client code. What I mean by not to make sense becomes clearer when looking at the implementation of boost::enable_if (I took this from the 1_57_0 distribution):

  template <bool B, class T = void>
  struct enable_if_c {
    typedef T type;
  };

  template <class T>
  struct enable_if_c<false, T> {};

  template <class Cond, class T = void>
  struct enable_if : public enable_if_c<Cond::value, T> {};

The metafunction (struct) enable_if takes one input parameter, namely Cond, standing for condition. This parameter is a type, since we are doing meta programming, but the implementation dereferences the value via ::value, just as we did above for testing, and passes it on to the implementing struct enable_if_c. Passing means inheritance here, we are programming with structs (with difficult topics I tend to repeat myself, bear with me please).

The second parameter T, defaulted to void, does not play any important role, it could be just any type (at least in our application here). The return value of enable_if_c is this type if the input parameter is true, or nothing if the input parameter is false. This is implemented by partial template specialization, returning (by declaring a typedef for) T = void in the general definition, but nothing in the specialization for B = false.

Now if nothing is what we get as the return value ::type, i.e. there is no typedef at all for the return value, the expression for our second dummy parameter from above

typename boost::enable_if<
        boost::is_base_of<B,D> >::type *dummy = 0

does not make sense since without a ::type we can not declare a pointer to it. If type is void on the other hand the expression makes perfect sense. One could add that a non-existent ::type in the meta programming space is what void is in the regular programming space. Not very helpful and just to confuse everyone a bit.

The first case, when ::type is void, is clear now: the function is taken by the compiler to instantiate the client code call. And this is the case if our earlier condition checking the inheritance relationship is true.

If this is false on the other hand, a senseless expression is born during the template substitution process and the candidate is therefore discarded by the compiler for further use. This is also known as SFINAE, which stands for “substitution failure is not an error”; you can read more about that here.

The test code for the whole thing is as follows

boost::shared_ptr<UnrelatedClass<double> > noPricer;
boost::shared_ptr<BaseCouponPricer<double> > basePricer;
boost::shared_ptr<DerivedCouponPricer<double> > derivedPricer;

setCouponPricer(noPricer); // compile error
setCouponPricer(basePricer); // ok
setCouponPricer(derivedPricer); // ok

This is all a bit crazy and sounds like exploiting undocumented language features not meant for these purposes. And probably it was like that in the beginning, when meta programming was discovered. But it is obviously widely used nowadays, and even necessary to comply with the already classic C++11 standard.

Which we just discussed yesterday when talking about C++11-compliant random number generators. Here is an interesting one by Thijs van den Berg. In chapter / table 117 of the C++11 standard n3242 it says that you must provide two seed methods, one taking an integer and an additional one taking a sequence generator. Since the two signatures overlap under template argument deduction, the only way of telling whether you have an integer or a sequence generator is by using techniques like the ones above. And so did Thijs in his code. So better get used to it, I guess.

By the way, compile time calculations are quite popular. There is even a template meta programming raytracer. I might port some QuantLib pricing engines to compile time versions, should the adjoint project die for some reason.

The final implementation of setCouponPricers in the real code now looks as follows

template <template <class> class S, class T>
void setCouponPricers(
    const typename Leg_t<T>::Type &leg,
    const std::vector<boost::shared_ptr<S<T> > > &pricers,
    typename boost::enable_if<boost::is_base_of<FloatingRateCouponPricer_t<T>,
                                                S<T> > >::type *dummy = 0) {
/* ... */

Again a good example that templated code, together with some dirty generic programming tricks, does not necessarily lose readability.
