This is a note for those who want to set up a Samsung wireless printer under Linux. It is quite simple;
this forum post helped me. The actual useful steps on Fedora 25 are:
download the tar.gz Linux driver from the Samsung website. As root, unpack and install:
tar xvzf SamsungPrinterInstaller.tar.gz
cd uld
./install.sh
in the printer menu, look up the wireless key (8 digits),
connect to the printer's Wi-Fi network with a computer, using the wireless key,
A couple of weeks ago, I wrote about DVSS2, a new Heston discretisation scheme which was at least as accurate as Andersen's QE scheme and faster.
It turns out that it does not behave very well on the following vanilla forward start option example (which is quite benign).
The Heston parameters come from a calibration to the market and are
On a standard vanilla option, DVSS2 behaves as advertised in the paper, but not on a forward start option
with forward start date \(T_1=\frac{7}{8}\) (relatively close to the maturity).
A forward start call option will pay \(\max(S(T)-k S(T_1),0)\).
This is particularly visible on the following graph of the price against
the time-step size (1, 1/2, 1/4, 1/8, 1/16, 1/32), for strikes 100% and 140% (it works well for strike 70%),
where 32 time-steps are necessary.
It would appear that the forward dynamic is sometimes poorly captured by the DVSS2 scheme.
This makes DVSS2 not competitive in practice compared to Andersen's QE, or even Alfonsi, as it cannot be trusted
for a time-step size larger than 1/32.
Note that I insert an extra step at 7/8 for time-step sizes greater than or equal to 1/4: a time-step size of 1 corresponds in reality
to two time-steps of size 7/8 and 1/8 respectively.
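As a small illustration (a sketch of the step insertion only, not of the paper's algorithm), the extra step can be spliced into a uniform time grid on \([0,1]\) like this:

```go
package main

import "fmt"

// buildGrid returns time steps of size dt covering [0,1], splicing in the
// forward-start date t1 = 7/8 when it does not already fall on the grid.
func buildGrid(dt float64) []float64 {
	const t1 = 7.0 / 8.0
	grid := []float64{0}
	for t := dt; t < 1+1e-12; t += dt {
		if grid[len(grid)-1] < t1 && t > t1+1e-12 {
			grid = append(grid, t1) // extra step at the forward-start date
		}
		grid = append(grid, t)
	}
	return grid
}

func main() {
	fmt.Println(buildGrid(1.0))  // [0 0.875 1]
	fmt.Println(buildGrid(0.25)) // [0 0.25 0.5 0.75 0.875 1]
}
```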
The error actually arises because the log-asset process is sampled using a discrete random variable that matches
the first 5 moments of the normal distribution. The so-called step 5 of the algorithm specifies:
$$\hat{X} := \bar{x} + \xi \sqrt{\frac{1}{2}(\bar{y}+\hat{Y})h}$$
The notation is quite specific to the paper, what you need to know is that \(\hat{X}\) corresponds to the log-asset process
while \(\hat{Y}\) corresponds to the stochastic volatility process and \(\xi\) is the infamous discrete random variable.
In reality, there is no good reason to use a discrete random variable besides lowering the computational cost.
And it is obviously detrimental in the limit case where the volatility is deterministic (the Black-Scholes case), as then
the log-process will only match the first 5 moments of the normal distribution, while it should be exactly normal.
Replacing \(\xi\) by a standard normally distributed random variable is enough to fix DVSS2. Note
that it could also be discretised like the QE scheme, using a Broadie-Kaya interpolation scheme.
The problem is that it is then no faster than QE anymore, so it is not clear why it would be preferable.
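For reference, a standard three-point law matching the first five moments of \(N(0,1)\) takes the values \(-\sqrt{3}, 0, \sqrt{3}\) with probabilities \(1/6, 2/3, 1/6\) (a common choice in weak schemes, not necessarily the exact variable used in the paper); the sketch below verifies the moments and shows the mismatch at order 6:

```go
package main

import (
	"fmt"
	"math"
)

// discreteMoment returns E[xi^k] for the three-point law
// xi in {-sqrt(3), 0, sqrt(3)} with probabilities {1/6, 2/3, 1/6}.
func discreteMoment(k int) float64 {
	nodes := []float64{-math.Sqrt(3), 0, math.Sqrt(3)}
	probs := []float64{1.0 / 6, 2.0 / 3, 1.0 / 6}
	var m float64
	for i, x := range nodes {
		m += probs[i] * math.Pow(x, float64(k))
	}
	return m
}

func main() {
	normal := []float64{0, 1, 0, 3, 0, 15} // moments of N(0,1) for k=1..6
	for k := 1; k <= 6; k++ {
		fmt.Printf("k=%d discrete=%.4f normal=%.0f\n", k, discreteMoment(k), normal[k-1])
	}
}
```

The moments agree up to \(k=5\); at \(k=6\) the discrete law gives 9 instead of 15, which is exactly the kind of gap that shows up in the deterministic-volatility limit.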
A discrete distribution matching the first 9 moments of the normal distribution
I then tried to see whether matching more moments with a discrete distribution would help. More than 8 bits
are available when generating a uniform random double-precision number (as it is represented by 53 bits).
The game is then to find nodes such that the distribution with discrete probabilities of the form i/256, for some integer i,
matches the moments of the normal distribution. It is unfortunately a nonlinear problem, but I found a solution
for the probabilities
GNOME Shell has been crashing on me more regularly lately.
XFCE is a good, fast, more traditional desktop but, from past experience,
it does not play well with power management if you only install it via
dnf install @xfce-desktop-environment
My typical experience is a black screen after resuming from suspend (sometimes, not always) or hibernate (always), and
most of the time I end up just rebooting. It turns out this is all caused by the interaction between the gdm login daemon and XFCE. Moving
to the lightdm login daemon instead fixes those issues on Fedora 25:
Many papers present formulae to price Asian options in the Black-Scholes world only for the fixed-strike Asian case, that is,
a contract that pays \( \max(A-K,0)\) at maturity \(T\) where \(A = \sum_{i=0}^{n-1} w_i S(t_i) \) is the Asian average.
More generally, this can be seen as the payoff of a Basket option where the underlyings are just the same asset but at different times.
Any basket option formula can actually be used to price fixed-strike Asian options by letting the correlations correspond to the correlations between the asset at the averaging times,
and the variances correspond to the variance at each averaging time. The basket approach then naturally allows for a term-structure of rates, dividends and volatilities.
We have:
$$
V_{\textsf{floating}}(\eta,S_0,k,\bar{\sigma}_i^2 t_i, C(0,t_i), B(0,T)) =\\
\quad k V_{\textsf{fixed}}\left(-\eta,S_0, \frac{S_0}{k},\bar{\sigma}^2(T) T-\bar{\sigma}_i^2 t_i,\frac{C(0,t_i)}{C(0,T)},B(0,T)C(0,T) \right)
$$
where \(\eta = \pm 1\) for a call (respectively a put), \(\bar{\sigma}_i\) are the Vanilla options implied volatilities at the averaging times \(t_i\), \(C(0,t_i)\) are the capitalization factors, and \(B(0,T)\) is the discount factor.
Proof:
We assume that \(S\) follows \(dS = \mu_t S dt + \sigma_t S dW_t\). The discount factor \(B\) is defined as \(B(0,T)=e^{-\int_{0}^T r_s ds}\). Let \(C(0,t) = e^{\int_{0}^t \mu_s ds}\). The process associated to the forward to time \(t\) is \(F_t=S_0 C(0,t)M_t\) with \(M_t = e^{\int_0^t \sigma_s dW_s - \frac{1}{2}\int_0^t \sigma_s^2 ds}\) being a martingale.
We have:
$$
V_{\textsf{floating}}(\eta,S_0,k,\bar{\sigma}_i^2 t_i, C(0,t_i), B(0,T))\\
= B(0,T)\mathbb{E}\left[\max\left(\eta F_T-\eta k\sum_{i=0}^{n-1}w_i F_{t_i},0\right)\right]\\
= k B(0,T)\mathbb{E}\left[\max\left(\eta\frac{1}{k}F_T-\eta\sum_{i=0}^{n-1}w_i F_{t_i},0\right)\right]\\
=k B(0,T)C(0,T)\mathbb{E}\left[ M_T \max\left(\eta\frac{S_0}{k}-\eta\sum_{i=0}^{n-1}w_i S_0 \frac{C(0,t_i)M_{t_i}}{C(0,T) M_T },0\right)\right]
$$
We now proceed to a change of measure defined by \(M_T\). Under the new measure \(\mathbb{Q}^T\), \(\bar{W}_t = W_t - \int_0^t \sigma_s ds\) is a Brownian motion. \(\frac{M_t}{M_T}\) under \(\mathbb{Q}\) has the same law as \(\frac{M_t}{M_T} e^{-\int_t^T \sigma_s^2 ds}\) under \(\mathbb{Q}^T\) or equivalently as
\(\bar{M}_t = e^{\int_t^T \sigma_s d\bar{W}_s - \frac{1}{2}\int_t^T \sigma_s^2 ds}\) under \(\mathbb{Q}^T\).
Defining \(\mathbb{E}^T\) to be the expectation under \(\mathbb{Q}^T\), we thus have
$$
V_{\textsf{floating}}(\eta,S,k,\bar{\sigma}_i^2 t_i, C(0,t_i), B(0,T))\\
=k B(0,T)C(0,T)\mathbb{E}^{T}\left[\max\left(\eta\frac{S_0}{k}-\eta\sum_{i=0}^{n-1}w_i S_0 \frac{C(0,t_i)}{C(0,T)}\bar{M}_{t_i},0\right)\right]\\
=k V_{\textsf{fixed}}\left(-\eta,S_0, \frac{S_0}{k},\bar{\sigma}^2(T) T-\bar{\sigma}_i^2 t_i,\frac{C(0,t_i)}{C(0,T)},B(0,T)C(0,T) \right)
$$
This concludes the proof.
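The identity can also be checked by brute force. The sketch below (zero rates, flat volatility, and averaging dates and parameters of my own choosing) estimates both sides by Monte-Carlo, driving both estimators with the same Brownian paths:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// asianParityMC estimates the floating-strike Asian call max(S_T - k*A, 0) and
// the equivalent fixed-strike payoff k*max(S0/k - sum_i w_i S0 Mbar_{t_i}, 0)
// from the proof, under zero rates and a flat volatility, with w_i = 1/n and
// averaging dates t_i = i*T/n. The same Brownian path drives both estimators.
func asianParityMC(s0, sigma, T, k float64, n, paths int, seed int64) (vFloat, vFixed float64) {
	rng := rand.New(rand.NewSource(seed))
	dt := T / float64(n)
	w := make([]float64, n+1)
	for p := 0; p < paths; p++ {
		for i := 1; i <= n; i++ {
			w[i] = w[i-1] + math.Sqrt(dt)*rng.NormFloat64()
		}
		var a, abar float64
		for i := 1; i <= n; i++ {
			t := float64(i) * dt
			a += s0 * math.Exp(sigma*w[i]-0.5*sigma*sigma*t) / float64(n)
			// Mbar_{t_i} carries the variance accumulated on [t_i, T]
			abar += s0 * math.Exp(sigma*(w[n]-w[i])-0.5*sigma*sigma*(T-t)) / float64(n)
		}
		sT := s0 * math.Exp(sigma*w[n]-0.5*sigma*sigma*T)
		vFloat += math.Max(sT-k*a, 0)
		vFixed += k * math.Max(s0/k-abar, 0)
	}
	return vFloat / float64(paths), vFixed / float64(paths)
}

func main() {
	v1, v2 := asianParityMC(100, 0.3, 1.0, 1.0, 12, 200000, 42)
	fmt.Printf("floating: %.3f  fixed-equivalent: %.3f\n", v1, v2)
}
```

With enough paths, the two estimates agree within the Monte-Carlo error.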
With the increasing use of b.p. (a.k.a. Normal, a.k.a. Bachelier) vols, a natural question (that Gary asked) is:
Is there a similar rule for the Normal volatility?
It turns out that the answer is not as simple. It can easily be shown that the normal volatility cannot grow faster than linearly
in the strike (not in the variance this time). But that does not mean linear growth is acceptable; in fact, it is not.
The upper bound can be refined to
$$ \frac{K-F}{\sqrt{2T \ln \frac{K}{F}}} $$
It is still an asymptotic upper bound only.
It can be shown that any number larger than the factor 2 in the denominator will not lead to asymptotic arbitrage.
The bound could even be made tighter with, I suspect, additional terms in \( \ln \ln K \), but there is no
simple exact formula for the limit as in the Black-Scholes world.
I recently stumbled upon a new Heston discretisation scheme, in the spirit of Alfonsi, no more complex and more accurate.
My first attempt at coding the scheme resulted in a miserable failure, even though the described algorithm did not look
too difficult. I started wondering whether the paper, from a little-known Lithuanian mathematical journal, was any good.
Still, the math in it is very well written, with a great emphasis on the settings for each proposition.
I decided to simply send an email to Prof. Mackevicius, and got a reply the next day (the internet is wonderful sometimes).
The exchange helped me find out that the error was not where I was looking. After spending a bit more time on the paper, I discovered there was simply a missing step
in the algorithm.
Between step 3 and step 4, we should have
$$ \bar{x} = \frac{\bar{x}\sigma - \bar{y}\rho}{\sigma^2 \sqrt{1-\rho^2}}$$
$$ \bar{y} = \frac{\bar{y}}{\sigma^2}$$
corresponding to the transformation between equations 4.2 and 4.3 of the 2015 paper.
With the added step, the scheme works well. Even though the authors clearly made an effort to make their
very mathematically detailed paper more practical with the description of an algorithm, it looks like I have been the first person
to actually try it.
Jesper Andreasen and Brian Huge propose an arbitrage-free interpolation method
based on a single-step forward Dupire PDE solution in their paper Volatility interpolation.
To do so, they consider a piecewise constant representation of the local volatility in maturity time and strike
where the number of constants matches the number of market option prices.
An interesting example that shows some limits of the technique described in Jesper Andreasen and Brian Huge's paper comes from
Nabil Kahalé's paper on an arbitrage-free interpolation of volatilities.
Yes, the data is quite old and, as a result, not of great quality. But it illustrates the issue well.
The calibration of the piecewise constant volatilities on a uniform grid of 200 points (on the log-transformed problem) leads to a perfect fit:
the market vols are exactly reproduced by the following piecewise constant vols:
However, if we increase the number of points to 400, or even many more (2000, for example), the fit is no longer perfect, and
some of the piecewise constant vols explode (for the first two maturities), even though there is no arbitrage in the market option prices.
The single-step continuous model cannot represent the market implied volatilities, while, for some reason,
the discrete model with 200 points can. Note that the model vols were capped, otherwise they would explode even higher.
If, instead of a piecewise constant representation, we consider a continuous piecewise linear interpolation
(a linear spline with flat extrapolation), where each node falls on the grid point closest to a market strike, the calibration
becomes stable regardless of the number of grid points.
The RMSE is back to being close to machine epsilon. As a side effect, the Levenberg-Marquardt minimization takes many fewer iterations to converge, with either 200 or 400 points, when compared to the piecewise constant model,
likely because the objective function derivatives are smoother.
In the case most favorable to the piecewise constant model,
the minimization with the linear model requires about 40% fewer iterations.
We could also interpolate with a cubic spline, as long as we make sure that the volatility does not go below zero, for example by imposing a limit on the derivative values.
Overall, this raises questions about the interest of the numerically much more complex continuous-time version of the piecewise constant model
described in Filling the gaps by Alex Lipton and Artur Sepp: a piecewise constant representation is too restrictive.
We will assume zero interest rates and no dividends on the asset \(S\) for clarity.
The results can be easily generalized to the case with non-zero interest rates and dividends.
Under those assumptions, the Black-Scholes PDE is:
$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} = 0.$$
An implicit Euler discretisation on a uniform grid in \(S\) of width \(h\) with linear boundary conditions (zero Gamma) leads to:
$$ V^{k+1}_i - V^{k}_i = \frac{1}{2}\sigma^2 \Delta t S_i^2 \frac{V^{k}_{i+1}-2V^k_{i}+V^{k}_{i-1}}{h^2}.$$
for \(i=1,…,m-1\) with boundaries
$$ V^{k+1}_i - V^{k}_i = 0.$$
for \(i=0,m\).
This forms a linear system \( M \cdot V^k = V^{k+1} \), where \(M\) is a tridiagonal matrix, each of whose rows sums exactly to 1.
Furthermore, the payoff corresponding to the forward price, \(V_i = S_i\), is also preserved exactly by such a system, as the discretised second derivative of a linear function is exactly zero.
The scheme can be seen as preserving the zero-th and first moments.
As a consequence, by linearity, the put-call parity relationship will hold exactly (note that in between nodes, any interpolation used should also be consistent with the put-call parity for the result to be more general).
This result stays true for a non-uniform discretisation, and with other finite difference schemes as shown in this paper.
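This is easy to check numerically. The sketch below (with arbitrary parameters) assembles the implicit Euler tridiagonal matrix and measures how well constants and the forward payoff are preserved:

```go
package main

import (
	"fmt"
	"math"
)

// implicitEulerErrors builds the implicit Euler matrix M (with linear boundary
// conditions) on a uniform S grid and returns the largest deviation of the row
// sums from 1 and of M*S from S.
func implicitEulerErrors(m int, sigma, dt, h float64) (rowErr, fwdErr float64) {
	lower := make([]float64, m+1)
	diag := make([]float64, m+1)
	upper := make([]float64, m+1)
	s := make([]float64, m+1)
	for i := 0; i <= m; i++ {
		s[i] = 50 + float64(i)*h
	}
	diag[0], diag[m] = 1, 1 // zero Gamma: V^k = V^{k+1} at the boundaries
	for i := 1; i < m; i++ {
		alpha := 0.5 * sigma * sigma * dt * s[i] * s[i] / (h * h)
		lower[i], diag[i], upper[i] = -alpha, 1+2*alpha, -alpha
	}
	for i := 0; i <= m; i++ {
		rowErr = math.Max(rowErr, math.Abs(lower[i]+diag[i]+upper[i]-1))
		ms := diag[i] * s[i]
		if i > 0 {
			ms += lower[i] * s[i-1]
		}
		if i < m {
			ms += upper[i] * s[i+1]
		}
		fwdErr = math.Max(fwdErr, math.Abs(ms-s[i]))
	}
	return
}

func main() {
	rowErr, fwdErr := implicitEulerErrors(100, 0.2, 0.01, 1.0)
	fmt.Printf("max |row sum - 1| = %.1e, max |(M*S)_i - S_i| = %.1e\n", rowErr, fwdErr)
}
```

Both errors are at the level of machine rounding, confirming that the zero-th and first moments are preserved exactly by the scheme.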
It is common to consider the log-transformed problem in \(X = \ln(S)\), as the diffusion coefficient is then constant and a uniform grid is much better adapted to the process.
$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 \frac{\partial^2 V}{\partial X^2}-\frac{1}{2}\sigma^2 \frac{\partial V}{\partial X} = 0.$$
An implicit Euler discretisation on a uniform grid in \(X\) of width \(h\) with linear boundary conditions (zero Gamma in \(S\) ) leads to:
$$ V^{k+1}_i - V^{k}_i = \frac{1}{2}\sigma^2 \Delta t \frac{V^{k}_{i+1}-2V^k_{i}+V^{k}_{i-1}}{h^2}-\frac{1}{2}\sigma^2 \Delta t \frac{V^{k}_{i+1}-V^{k}_{i-1}}{2h}.$$
for \(i=1,…,m-1\) with boundaries
$$ V^{k+1}_i - V^{k}_i = 0.$$
for \(i=0,m\).
Such a scheme no longer preserves the forward price: the forward payoff is now \(V_i = e^{X_i}\), which is not linear in \(X\).
It is possible to preserve the forward by slightly changing the diffusion coefficient, very much as in the exponential fitting idea. The difference is that, here, we are not interested
in handling a large drift (when compared to the diffusion) without oscillations, but merely in preserving the forward exactly.
We want the adjusted volatility \(\bar{\sigma}\) to solve
$$\frac{1}{2}\bar{\sigma}^2 \frac{e^{h}-2+e^{-h}}{h^2}-\frac{1}{2}\sigma^2 \frac{e^{h}-e^{-h}}{2h}=0.$$
Note that the discretised drift does not change, only the discretised diffusion term. The solution is:
$$\bar{\sigma}^2 = \frac{\sigma^2 h}{2} \coth\left(\frac{h}{2} \right) .$$
This needs to be applied only for \(i=1,…,m-1\).
This is actually the same adjustment as the exponential fitting technique with a drift of zero. For a non-zero drift, the two adjustments would differ, as the exact forward adjustment will stay the same, along with an adjusted discrete drift.
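A quick numerical check (with arbitrary \(\sigma\) and \(h\)) confirms that the adjustment cancels the residual exactly, while the unadjusted scheme leaves a small leak:

```go
package main

import (
	"fmt"
	"math"
)

// residuals evaluates the forward-preservation equation applied to V_i = exp(X_i),
// with and without the sigmaBar^2 = sigma^2 * (h/2) * coth(h/2) adjustment.
func residuals(sigma, h float64) (adjusted, raw float64) {
	drift := 0.5 * sigma * sigma * (math.Exp(h) - math.Exp(-h)) / (2 * h)
	diffusion := (math.Exp(h) - 2 + math.Exp(-h)) / (h * h)
	sigmaBar2 := sigma * sigma * 0.5 * h / math.Tanh(0.5*h) // coth(x) = 1/tanh(x)
	adjusted = 0.5*sigmaBar2*diffusion - drift
	raw = 0.5*sigma*sigma*diffusion - drift
	return
}

func main() {
	adjusted, raw := residuals(0.25, 0.1)
	fmt.Printf("residual with adjustment: %.1e, without: %.1e\n", adjusted, raw)
}
```

The adjusted residual vanishes to machine precision, while the unadjusted one is of order \(h^2\).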
We have seen earlier that a simple parabola captures the smile of AAPL 1m options surprisingly well. For very high and very low strikes,
the parabola does not obey Lee's moments formula (the total variance needs to be at most linear in the log-moneyness in the wings).
Extrapolating the volatility smile to very low or very high strikes in a smooth \(C^2\) fashion is however not easy.
A surprisingly popular so-called “arbitrage-free”
method is the extrapolation of Benaim, Dodgson and Kainth, developed to remedy the negative density of SABR in interest rates as
well as to give more control over the wings.
The call options prices (right wing) are extrapolated as:
$$
C(K) = \frac{1}{K^\nu} e^{a_R + \frac{b_R}{K} + \frac{c_R}{K^2}} \text{.}
$$
\(C^2\) continuity conditions for the right wing at strike \(K_R\) lead to:
$$
c_R =\frac{{C'}_R}{C_R}K_{R}^3+ \frac{1}{2}K_{R}^2 \left(K_{R}^2 \left(- \frac{{C'}_{R}^{2}}{C_{R}^2}+ \frac{{C''}_R}{C_R}\right) + \nu \right)\text{,}
$$
$$
b_R = - \frac{{C'}_R}{C_R} K_R^2 - \nu K_R-2 \frac{c_R}{K_R}\text{,}
$$
$$
a_R = \log(C_R)+ \nu \log(K_R) - \frac{b_R}{K_R} - \frac{c_R}{K_{R}^2}\text{.}
$$
The \(\nu\) parameter allows one to adjust the shape of the extrapolation.
Unfortunately, it does not really work for equities.
Very often, the extrapolation will explode, which is what we wanted to avoid in the first place. We illustrate it here on
our best-fit parabola of the AAPL 1m options:
The \( \nu\) parameter does not help either.
Update Dec 5 2016
Here are the details of the call option price, slope and curvature:
The strike cutoff is \(K_R=115.00001307390328\). For the parabola,
For the least squares spline,
$$\begin{align}
C(K_R)&=0.03892674300426042,\\
C'(K_R)&=-0.00171386452499034,\\
C''(K_R)&=0.0007835686926501496.
\end{align}$$
which results in
$$a_R=131.894286, b_R=-26839.217814, c_R=1550285.706087.$$
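For completeness, the coefficients can be recomputed from the continuity conditions. In the sketch below, \(\nu=4\) is an assumption on my part: the post does not state the value explicitly, but it is the one consistent with the coefficients quoted above:

```go
package main

import (
	"fmt"
	"math"
)

// bdkRightWing returns (a_R, b_R, c_R) from the C2 continuity conditions at K_R,
// given the call price c, slope c1 and curvature c2 there, and the tail exponent nu.
func bdkRightWing(kr, c, c1, c2, nu float64) (aR, bR, cR float64) {
	cR = c1/c*kr*kr*kr + 0.5*kr*kr*(kr*kr*(-(c1/c)*(c1/c)+c2/c)+nu)
	bR = -c1/c*kr*kr - nu*kr - 2*cR/kr
	aR = math.Log(c) + nu*math.Log(kr) - bR/kr - cR/(kr*kr)
	return
}

func main() {
	// least-squares spline values at the cutoff strike, taken from the post;
	// nu = 4 is assumed (the value consistent with the quoted coefficients)
	aR, bR, cR := bdkRightWing(115.00001307390328,
		0.03892674300426042, -0.00171386452499034, 0.0007835686926501496, 4)
	fmt.Printf("a_R=%.6f b_R=%.6f c_R=%.6f\n", aR, bR, cR)
}
```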
In finance, and also in science, the Mersenne Twister is the de facto pseudo-random number generator (PRNG) for Monte-Carlo simulations.
By the way, there is a recent 64-bit maximally equidistributed version, called MEMT19937, designed with 53-bit double-precision floating point numbers in mind.
Counter-based PRNGs based on cryptographic standards such as AES
were historically slow, which motivated the development of sequential PRNGs with good statistical properties,
yet not cryptographically strong, such as the Mersenne Twister, for Monte-Carlo simulations.
The randomness of AES is of vital importance to its security, which makes the use of the AES128 encryption algorithm as a PRNG sound (see the Hellekalek & Wegenkittl paper).
Furthermore, a D.E. Shaw study shows that AES can be faster than the Mersenne Twister.
In my own simple implementation using the standard library of the Go language,
it was around twice as slow as the Mersenne Twister at generating random double-precision floats,
which can result in a 5% performance loss for a local volatility simulation.
The relative slowness could be explained by the type of processor used, but it is still competitive for Monte-Carlo use.
The code is extremely simple, with many possible variations around the same idea. Here is mine:
Interestingly, the above code was 40% faster with Go 1.7 compared to Go 1.6, which resulted in a local vol Monte-Carlo simulation performance improvement of around 10%.
The stream cipher Salsa20 is another possible candidate to use as counter-based PRNG.
The algorithm has been selected as a Phase 3 design in the 2008 eSTREAM project organised by the European Union ECRYPT network,
whose goal is to identify new stream ciphers suitable for widespread adoption.
It is faster than AES in the absence of specific AES CPU instructions.
Our tests, run with a straightforward implementation that does not make use of specific AMD64 instructions,
show the resulting PRNG to be faster than MRG63k3a and only 5% slower than MEMT19937 for local volatility Monte-Carlo simulations, that is, the same speed as the Go AES PRNG above.
While it led to sensible results, there does not seem to be any study yet of its equidistribution properties.
Counter-based PRNGs are parallelizable by nature: if a counter is used as the plaintext,
we can generate any point in the sequence at no additional cost by just setting
the counter to the point's position in the sequence; the generator is not sequential.
Furthermore, alternate keys can be used to create independent substreams:
the strong cryptographic property will guarantee the statistical independence.
A PRNG based on AES will allow \(2^{128}\) substreams of period \(2^{128}\).
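As an illustration of the counter-based idea (a hypothetical minimal sketch, not the exact snippet benchmarked above), here is an AES-128 counter generator in Go built on the standard crypto/aes package:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/binary"
	"fmt"
)

// aesPRNG produces uniform doubles by encrypting an incrementing
// 128-bit counter with AES-128; each block yields two uint64s.
type aesPRNG struct {
	block cipher.Block
	ctr   uint64
	in    [16]byte
	out   [16]byte
	idx   int
}

func newAESPRNG(key [16]byte) *aesPRNG {
	b, err := aes.NewCipher(key[:])
	if err != nil {
		panic(err) // cannot happen for a 16-byte key
	}
	return &aesPRNG{block: b, idx: 2}
}

func (g *aesPRNG) Uint64() uint64 {
	if g.idx == 2 { // buffer exhausted: encrypt the next counter value
		binary.LittleEndian.PutUint64(g.in[:8], g.ctr)
		g.ctr++
		g.block.Encrypt(g.out[:], g.in[:])
		g.idx = 0
	}
	v := binary.LittleEndian.Uint64(g.out[8*g.idx:])
	g.idx++
	return v
}

// Float64 maps the top 53 bits to a uniform double in [0,1).
func (g *aesPRNG) Float64() float64 {
	return float64(g.Uint64()>>11) * (1.0 / (1 << 53))
}

func main() {
	g := newAESPRNG([16]byte{1, 2, 3})
	for i := 0; i < 4; i++ {
		fmt.Println(g.Float64())
	}
}
```

A different key yields an independent substream, and skipping ahead amounts to setting ctr directly.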