Go for Monte-Carlo

A few months ago, I looked at the Julia, Dart, Rust and Scala programming languages to see how practical they would be for a simple Monte-Carlo option pricing test.

I forgot the Go language. I had tried it one or two years ago and, at the time, did not enjoy it much. Looking at Go 1.5 benchmarks on the computer language shootout, I was surprised to see it now so close to Java performance, while having a GC that guarantees pauses of less than 10ms and consuming much less memory.

I am in general a bit skeptical of those benchmarks; some can be rigged. A few years ago, I tried my hand at the thread ring test and found that it actually ran fastest on a single thread, even though it is supposed to measure threading performance. Yesterday I looked at the source of one Go entry (I think it was pidigits) and saw that it just called a C library (gmp) to do the big-integer arithmetic. It is then no surprise that Go would beat Java on that test.

So what about my small Monte-Carlo test?

Well, it turns out that Go is quite fast on it:

Multiplier   Rust    Go
1            0.005   0.007
10           0.03    0.03
100          0.21    0.29
1000         2.01    2.88

It is faster than Java/Scala and not too far off Rust, except if one uses FastMath in Scala: in that case the longest test is slightly faster with Java (though not the other ones).
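
As a point of reference, the heart of such a test is a simple averaging loop. Here is a minimal, self-contained sketch in Go; the parameters, names, and the use of the standard NormFloat64 generator are illustrative only, my actual benchmark uses an inverse cumulative normal and its own uniform generator:

package main

import (
	"fmt"
	"math"
	"math/rand"
)

// priceCall estimates a European call price by Monte-Carlo under
// Black-Scholes dynamics. A sketch, not the exact benchmark code.
func priceCall(spot, strike, vol, r, t float64, paths int) float64 {
	rng := rand.New(rand.NewSource(42))
	drift := (r - 0.5*vol*vol) * t
	stdev := vol * math.Sqrt(t)
	sum := 0.0
	for i := 0; i < paths; i++ {
		z := rng.NormFloat64() // one standard normal draw per path
		s := spot * math.Exp(drift+stdev*z)
		if s > strike {
			sum += s - strike // call payoff
		}
	}
	return math.Exp(-r*t) * sum / float64(paths)
}

func main() {
	fmt.Println(priceCall(100, 100, 0.2, 0.01, 1.0, 1000000))
}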

There are some issues with the Go language: there is no operator overloading, which can make matrix/vector algebra more tedious, and there are no generics/templates. The latter is somewhat mitigated by the automatic interface implementation, and fortunately for the former, complex numbers are a standard type. Still, automatic differentiation would be painful.
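
As an illustration of that last point, the built-in complex128 type comes with the usual arithmetic operators, with no overloading machinery involved:

package main

import (
	"fmt"
	"math/cmplx"
)

func main() {
	z := complex(1.0, 2.0) // 1+2i, a complex128 value
	w := z*z + 3*z         // +, *, etc. work directly on complex128
	fmt.Println(real(w), imag(w), cmplx.Abs(w))
}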

Still, it was extremely quick to grasp the language and write code in it, because it is so simple, especially compared to Rust. On the other hand, unlike Rust, it does not provide as much safety. Rust is quite impressive on that front (though unfortunately at the cost of less readable code). I'd say that Go could become a serious alternative to Java.

I also found an interesting minor performance issue with the default Go Rand.Float64: the library converts an Int63 to a double-precision number this way:

// Float64 returns, as a float64, a pseudo-random number in [0.0,1.0).
func (r *Rand) Float64() float64 {
	// Int63 can return values so close to 2^63 that the float64
	// conversion rounds up to exactly 2^63, making f equal to 1.
	f := float64(r.Int63()) / (1 << 63)
	if f == 1 {
		f = 0
	}
	return f
}
I was interested in having a number in (0,1) and not [0,1), so I just used the conversion pattern from the MersenneTwister 64 code:
f := (float64(r.Int63() >> 11) + 0.5) * (1.0/4503599627370496.0)
The reasoning behind this latter code is that the mantissa of a double is 52 bits, which is the most accuracy we can have between 0 and 1. There is no need to go further, and this also avoids the issue around 1. It is also straightforward to see that it preserves uniformity, while it is not so clear to me that r.Int63()/2^63 does: double precision is finer near 0 (where the exponent part can be used) and coarser near 1, so many more distinct integers will collapse to identical results near 1 than near 0.
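
Wrapped as a helper, that conversion might look like this (assuming the math/rand import; float64Open is my own name for it, not a standard library function):

// float64Open returns a uniform pseudo-random number in the open interval (0,1).
// The top 52 random bits form the mantissa, and the +0.5 offset centers each
// value so that neither 0 nor 1 can ever be returned.
func float64Open(r *rand.Rand) float64 {
	return (float64(r.Int63()>>11) + 0.5) * (1.0 / 4503599627370496.0) // / 2^52
}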

It turns out that the if check adds a 5% performance penalty on this test, likely because of processor caching issues. That surprised me, since there are many other ifs later in the code, in the inverse cumulative function and in the payoff.
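
The effect is easy to measure in isolation with Go's built-in benchmark support; a minimal sketch (run with go test -bench=.; the file and names are mine):

package rng_test

import (
	"math/rand"
	"testing"
)

// sink prevents the compiler from optimizing the loop body away.
var sink float64

// BenchmarkFloat64Branch times the standard conversion with the f == 1 check.
func BenchmarkFloat64Branch(b *testing.B) {
	r := rand.New(rand.NewSource(1))
	for i := 0; i < b.N; i++ {
		f := float64(r.Int63()) / (1 << 63)
		if f == 1 {
			f = 0
		}
		sink = f
	}
}

// BenchmarkFloat64Open times the branch-free open-interval conversion.
func BenchmarkFloat64Open(b *testing.B) {
	r := rand.New(rand.NewSource(1))
	for i := 0; i < b.N; i++ {
		sink = (float64(r.Int63()>>11) + 0.5) * (1.0 / 4503599627370496.0)
	}
}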
