A Double Precision Puzzle with the Gaussian

Some library computes the exponential part of the Gaussian density, $$e^{-\frac{x^2}{2}}$$, the following way:

xsq = fint(x * 1.6) / 1.6;
del = (x - xsq) * (x + xsq);
result = exp(-xsq * xsq * 0.5) * exp(-del * 0.5);

where fint(z) computes the floor of z.

Basically, x*x is rewritten as xsq*xsq + del. I have seen that trick once before, but I just can’t figure out where or why (except that it is probably related to high-accuracy issues).
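For reference, here is a minimal, runnable Java version of the snippet above (with fint replaced by Math.floor, as fint computes the floor); the two exponential factors recombine to \(e^{-\frac{x^2}{2}}\) since x*x = xsq*xsq + del:

static double gaussianExponential(double x) {
    double xsq = Math.floor(x * 1.6) / 1.6;   // fint(x * 1.6) / 1.6
    double del = (x - xsq) * (x + xsq);       // x*x - xsq*xsq
    return Math.exp(-xsq * xsq * 0.5) * Math.exp(-del * 0.5);
}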

The answer is in the next post.

A Seasoned Volatility Swap

This is very much what's in the Carr-Lee paper "Robust Replication of Volatility Derivatives", but it wasn't so easy to obtain in practice:
  • The formulas as written in the paper are not usable as is: they first need to be simplified (not too difficult, but intimidating at first).
  • The numerical integration is not trivial: a simple Gauss-Laguerre is not precise enough (maybe it would be with an implementation using more points), and Gauss-Kronrod is not either (maybe it would work if the domain were split into several regions). Funnily, a simple adaptive Simpson works ok (although my integration boundaries are very basic: 1e-5 to 1e5); a sketch of it follows this list.
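Here is a minimal sketch of such an adaptive Simpson rule (generic, not tied to the Carr-Lee integrand; the integrand, bounds and tolerance in the usage comment are placeholders):

import java.util.function.DoubleUnaryOperator;

static double adaptiveSimpson(DoubleUnaryOperator f, double a, double b, double tol) {
    double m = 0.5 * (a + b);
    double fa = f.applyAsDouble(a), fm = f.applyAsDouble(m), fb = f.applyAsDouble(b);
    return step(f, a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), tol);
}

static double simpson(double a, double b, double fa, double fm, double fb) {
    return (b - a) / 6.0 * (fa + 4.0 * fm + fb);   // Simpson's rule on [a, b]
}

static double step(DoubleUnaryOperator f, double a, double b,
                   double fa, double fm, double fb, double whole, double tol) {
    double m = 0.5 * (a + b), lm = 0.5 * (a + m), rm = 0.5 * (m + b);
    double flm = f.applyAsDouble(lm), frm = f.applyAsDouble(rm);
    double left = simpson(a, m, fa, flm, fm);
    double right = simpson(m, b, fm, frm, fb);
    if (Math.abs(left + right - whole) <= 15.0 * tol) {
        return left + right + (left + right - whole) / 15.0;   // Richardson extrapolation
    }
    return step(f, a, m, fa, flm, fm, left, 0.5 * tol)
         + step(f, m, b, fm, frm, fb, right, 0.5 * tol);
}

// usage (placeholder integrand): double value = adaptiveSimpson(x -> integrand(x), 1e-5, 1e5, 1e-9);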

A Volatility Swap and a Straddle

A volatility swap is a forward contract on future realized volatility. The pricing of such a contract used to be particularly challenging, often relying either on an imprecise, popular expansion in the variance, or on a model-specific approach (like Heston, or local volatility with jumps). Carr and Lee have recently proposed a way to price those contracts in a model-independent way in their paper “Robust Replication of Volatility Derivatives”. Here is the difference between the value of a synthetic volatility swap payoff at maturity (a newly issued one, with no accumulated variance) and a straddle.

Those are very close payoffs!

I wonder how good the discrete Derman approach is compared to a standard integration for such a payoff, as well as how important the extrapolation of the implied volatility surface is. The real payoff (very easy to obtain through the Carr-Lee Bessel formula):

Parallel Can Be Slower

I found a nice finite difference scheme where the solving part can be parallelized on 2 processors at each time step.

I was a bit surprised to notice that the parallelized algorithm ran, in some cases, twice as slowly as the same algorithm not parallelized. I tried ForkJoinPool, ThreadPoolExecutor, and my own notify/wait based parallelization. All resulted in similar performance compared to just calling thread1.run() and thread2.run() directly.

I am still a bit puzzled by the results. Increasing the size of the task by increasing the number of discretization points does not really improve the parallelization. The task is relatively fast to perform and is repeated many times (around 1000), so there are around 1000 synchronization points, which is likely why parallelization is not great here: the synchronization overhead wipes out any benefit of the parallelization. But I expected better. Using a thread pool with 1 thread is also much slower than calling run() twice (and, fortunately, slower than the pool of 2 threads).
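For illustration, here is a minimal Java sketch of the pattern in question (the pool type is one of those I tried; the task contents and step count are placeholders, not my actual scheme):

import java.util.concurrent.*;

// Two halves of the solve step are submitted to a fixed pool at every time step,
// so the pool synchronizes on the order of 1000 times over the whole solve.
static void solveParallel(Runnable half1, Runnable half2, int timeSteps) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    for (int t = 0; t < timeSteps; t++) {
        Future<?> f1 = pool.submit(half1);
        Future<?> f2 = pool.submit(half2);
        f1.get(); // wait for both halves before moving on to the next time step
        f2.get();
    }
    pool.shutdown();
}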

Scala is Mad (part 2)

I still have not abandoned Scala despite my previous post, mainly because I already have quite a bit of code and am too lazy to port it. Furthermore, the issues I detailed were not serious enough to motivate a switch. But these days I am more and more fed up with Scala, especially because of the Eclipse plugin. I tried the newer one, the beta, and the older one, the stable; the conclusion is the same. It’s welcome, but:

  • code completion is not great compared to Java. For example, one does not seem to be able to see the constructor parameters, and the method parameters cannot be automatically populated.
  • the plugin makes Eclipse very slow. Everything seems at least 3-5x slower. On-the-fly compilation is also much, much slower than Java’s.

It’s nice to type less, but if overall writing is slower because of the above issues, it does not help. Besides curiosity about new language features, I don’t see any point in Scala today, even if some of its ideas are interesting. I am sure it will be forgotten/abandoned in a couple of years. Today, if I were to try a new language, I would give Google Go a try: I don’t think another big language can make it/be useful on the JVM (besides a scripting kind of language, like JavaScript or Jython). Google Go focuses on the right problem: concurrency. It also is not constrained by JVM limitations (on the other hand, one cannot use a Java library - but open source stuff is usually not too difficult to port from one language to another). It has one of the fastest compilers. It makes interesting practical choices: no inheritance.

From OpenSuse to Ubuntu 13.04

In my Linux quest, I changed distribution again on my home desktop, from OpenSuse 11.1 with KDE to Ubuntu 13.04 (not yet released - so alpha) with Unity. Why?

  • KDE was crashing a bit too often for my taste (more than once a week), although I otherwise enjoyed the KDE environment.
  • Not easy to transfer files to my Android 4.2 phone. Ubuntu 13.04 is not fully there yet, but is on its way.
  • zypper is a bit too specific for my taste. I would be ok with yum+rpm or apt-get, but another tool just for a single distribution, not really.
  • Plus I’m just curious about what’s next for Ubuntu. Linux makes it very simple to change distributions and reinstall applications with the same settings, so it’s never a big task.
  • I somehow like how Ubuntu feels, not sure what it is exactly, maybe the Debian roots.

When people say OpenSuse is rock solid, I don’t have that impression, at least on the desktop. It might have been true in the past, but back then most distros were very stable. I don’t remember having big stability issues until, maybe, late 2010. In the early 2000s, a laptop would work very well with Linux, suspend included. I remember that my Dell Inspiron 8200 worked perfectly with Mandrake and WindowMaker. Nowadays, it never seems to work that well, but is just ok: Optimus comes to mind (especially with an external screen), suspend problems, wifi (not anymore).

So far I can see that Ubuntu 13.04 is prettier than past versions, and the installer is great. I encrypted my two hard disks; it was just a matter of ticking a box - very nice. The Unity Launcher, while interesting, is still not the greatest tool to find an installed application (compared to the KDE launcher or Gnome Shell). I don’t notice any stability issues so far, even though I get popup messages that sometimes tell me something crashed (typical for an alpha version). If I just ignore the messages, everything seems fine. OpenSuse-KDE used to log me out (session crash), or just stopped responding completely (hard reset necessary).

Productivity Zero

Sometimes it feels like big companies try to enforce the Productivity Zero rule.

Here is a guide:

  • involve as many teams as possible. It will help ensure endless discussions about who is doing what, and then how do I interface with them. This is most (in)efficient when team managers interact and are not very technically competent. One consequence is that nobody is fully responsible/accountable, which helps reinforce the productivity zero.
  • meetings, meetings and meetings. FUD (Fear, Uncertainty and Doubt) is king here. By spreading FUD, there will be more and more meetings. Even if the “project” is actually not a real project, but amounts to 10 lines of code, it is possible to have many meetings around it, over the span of several months (because everybody is always busy with other tasks). Another strategy is to use vocabulary: talk about technical or functional parts the others don’t understand. Some people are masters at talking technical to functional people and vice versa.
  • multiply by 20, not 2. It is surprisingly easy to tell upper management something is going to take 3 months, when, in fact, it can be done in 3 days. This is a bit like bargaining in South East Asia: it’s always amazing to find out how much you can push the price down (or how much they push it up).
  • hire as many well paid managers and product specialists as you can, and make sure they know nothing about the product or the functional parts but are very good at playing the political game without any content. Those people often manage to stay a long time, a real talent.

Better Finite Difference Boundaries with a Tridiagonal Solver

In Pricing Financial Instruments - The Finite Difference Method, Tavella and Randall explain that boundary conditions using a higher order discretization (for example their “BC2” boundary condition) cannot be solved in one pass with a simple tridiagonal solver, and they suggest the use of SOR or some conjugate gradient based solver.

It is actually very simple to reduce the system to a tridiagonal one. The more advanced boundary conditions still use only 3 adjacent values; a single entry makes the matrix non-tridiagonal, namely the third entry of the first row (and, symmetrically, the first entry of the last row) in the following matrix representation:

x x x
x x x
  x x x
    ...
      x x x
        x x x
        x x x

One just needs to replace the first row by a simple linear combination of the first two rows to remove the extra entry, and similarly for the last two rows. This amounts to very little computational work. Then one can use a standard tridiagonal solver. This is how I implemented it in a past post about boundary conditions of a bond in the CIR model. It is very surprising that they did not propose that simple solution in an otherwise very good book.
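Here is a minimal Java sketch of that reduction, under an assumed array layout of my own (lower/diag/upper diagonals plus the two extra boundary coefficients); it is not the book's notation:

// Eliminate the extra boundary coefficients so a standard tridiagonal (Thomas) solver applies.
// First row:  diag[0], upper[0], extraFirst      occupy columns 0, 1, 2
// Second row: lower[1], diag[1], upper[1]        occupy columns 0, 1, 2
// Last row:   extraLast, lower[n-1], diag[n-1]   occupy columns n-3, n-2, n-1
static void reduceToTridiagonal(double[] lower, double[] diag, double[] upper,
                                double extraFirst, double extraLast, double[] rhs) {
    int n = diag.length;
    double m = extraFirst / upper[1];      // combine row 0 with row 1 to zero out column 2
    diag[0] -= m * lower[1];
    upper[0] -= m * diag[1];
    rhs[0] -= m * rhs[1];
    m = extraLast / lower[n - 2];          // combine row n-1 with row n-2 to zero out column n-3
    lower[n - 1] -= m * diag[n - 2];
    diag[n - 1] -= m * upper[n - 2];
    rhs[n - 1] -= m * rhs[n - 2];
}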

Non Central Chi Squared Distribution in Java or Scala

I was looking for an implementation of the non-central chi-squared distribution function in Java, in order to price bond options under the Cox-Ingersoll-Ross (CIR) model and compare with a finite difference implementation. It turned out it was not so easy to find existing code for that in a library. I would have imagined that Apache Commons Math would provide it, but it does not.

OpenGamma has a not-too-bad implementation. It relies on Apache Commons Math for the Gamma function implementation. I looked at what was available in C/C++. There are some old Fortran-based ports with plenty of goto statements. There is also a nice implementation in the Boost library. It turns out it is quite easy to port it to Java. One still needs a Gamma function implementation; I looked at the Boost implementation of it and it turns out to be very similar to the Apache Commons Math one (which is surprisingly not too object-oriented and therefore quite fast - maybe they ported it from Boost or from a common source).

The Boost implementation seems much more robust in general thanks to:
  • The use of the complementary distribution function when the value is over 0.5. One drawback is that there is only one implementation of it, the Benton and Krishnamoorthy one, which is a bit slower than Ding's method.
  • Reverting to the Benton and Krishnamoorthy method in highly non-central cases. Benton and Krishnamoorthy is always very accurate, while Fraser (used by OpenGamma) is not very precise in general.
It is interesting to note that the two implementations of Ding's method are wildly different; the Boost implementation has better performance and is simpler (I measured that my Java port is around 50% faster than the OpenGamma implementation).
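For illustration, here is a minimal sketch of the underlying Poisson-mixture series (in the spirit of Ding's method, not a port of the Boost or OpenGamma code), using Apache Commons Math for the regularized incomplete Gamma function:

import org.apache.commons.math3.special.Gamma;

// Non-central chi-squared CDF as a Poisson mixture of central chi-squared CDFs:
// F(x; k, lambda) = sum_j e^{-lambda/2} (lambda/2)^j / j! * P(k/2 + j, x/2).
// Not robust for very large non-centrality (the initial Poisson weight underflows).
static double nonCentralChiSquaredCdf(double x, double k, double lambda) {
    if (x <= 0) {
        return 0.0;
    }
    double halfLambda = 0.5 * lambda;
    double poissonWeight = Math.exp(-halfLambda);                   // j = 0 weight
    double gammaTerm = Gamma.regularizedGammaP(0.5 * k, 0.5 * x);   // P(k/2, x/2)
    double sum = 0.0;
    for (int j = 0; j < 10000; j++) {
        sum += poissonWeight * gammaTerm;
        if (poissonWeight * gammaTerm < 1e-16 * sum && j > halfLambda) {
            break;
        }
        // recurrences: P(a + 1, y) = P(a, y) - y^a e^{-y} / Gamma(a + 1), then next Poisson weight
        double a = 0.5 * k + j;
        gammaTerm -= Math.exp(a * Math.log(0.5 * x) - 0.5 * x - Gamma.logGamma(a + 1));
        poissonWeight *= halfLambda / (j + 1);
    }
    return sum;
}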

If only it was simple to commit to open-source projects while working for a software company...

Finite Difference Approximation of Derivatives

A while ago, someone asked me to reference him in a paper of mine because I used formulas for the finite difference approximation of a derivative on a non-uniform grid. I was shocked, as those formulas are very widespread (in countless papers, courses and books) and not far from elementary mathematics.

There are however some interesting old papers on the technique. Usually people approximate the first derivative by the central approximation of second order:

$$ f'(x_i) \approx \frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1} - x_{i-1}} $$

However, there are other possibilities. For example, one can derive a formula directly from the Taylor expansions of \(f(x_{i+1})\) and \(f(x_{i-1})\). This paper and that one seem to indicate it is more precise, especially when the grid does not vary smoothly (a typical example is a piecewise uniform grid).
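For reference, here is the resulting second-order approximation on a non-uniform grid, restated from the two Taylor expansions (using my own convention \(h_i = x_i - x_{i-1}\), so \(h_{i+1} = x_{i+1} - x_i\)):

$$ f'(x_i) \approx -\frac{h_{i+1}}{h_i(h_i+h_{i+1})} f(x_{i-1}) + \frac{h_{i+1}-h_i}{h_i h_{i+1}} f(x_i) + \frac{h_i}{h_{i+1}(h_i+h_{i+1})} f(x_{i+1}) $$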

This can make a big difference in practice. Here is the example of a bond priced under the Cox-Ingersoll-Ross model by finite differences. EULER is the classic central approximation, EULER1 uses the more refined approximation based on Taylor expansion, and EULER2 uses the Taylor expansion approximation as well as a higher order boundary condition. I used the same parameters as in the Tavella-Randall book example and a uniform grid on [0, 0.2], except that I added 2 points at the far end, at 0.5 and 1.0. So the only difference between EULER and EULER1 lies in the computation of derivatives at the 3 last points.

I also computed the backward 2nd order first derivative on a non-uniform grid (for the refined boundary). I was surprised not to find this easily on the web, so here it is (with \(h_i = x_i - x_{i-1}\) and \(h_{i-1} = x_{i-1} - x_{i-2}\)): $$ f'(x_i) = \left(\frac{1}{h_i}+\frac{1}{h_i+h_{i-1}}\right) f(x_i)- \left(\frac{1}{h_{i-1}}+\frac{1}{h_i}\right) f(x_{i-1})+ \left(\frac{1}{h_{i-1}} - \frac{1}{h_i+h_{i-1}} \right) f(x_{i-2}) + \ldots$$

Incidentally, while writing this post I found out it was a pain to write math in HTML (I initially used a picture). MathML seems a bit crazy; I wonder why they couldn’t just use the LaTeX standard.

Update January 3rd 2013 - I now use MathJax. It’s not a very good solution, as I think this should typically be handled directly by the browser instead of by a huge JavaScript library, but it looks a bit better.
