# Bumping Correlations

Jul 25, 2015

In his book *Monte Carlo Methods in Finance*, P. Jäckel explains a simple way to clean up a correlation matrix. When a given correlation matrix is not positive semi-definite, the idea is to perform a singular value decomposition (SVD), replace the negative eigenvalues by 0, and renormalize the corresponding eigenvectors accordingly.
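This cleanup step can be sketched in a few lines of NumPy. An eigendecomposition is used here, which coincides with the SVD for a symmetric matrix; the function name is mine:

```python
import numpy as np

def spectral_cleanup(c):
    # Eigen-decomposition (equivalent to the SVD for a symmetric matrix).
    lam, s = np.linalg.eigh(c)
    # Replace the negative eigenvalues by 0.
    lam = np.maximum(lam, 0.0)
    # Renormalize so that the diagonal is exactly 1 again.
    b = s * np.sqrt(lam)
    b /= np.sqrt(np.sum(b * b, axis=1))[:, None]
    return b @ b.T

c = np.array([[ 1.0, -0.6, -0.5],
              [-0.6,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])
print(np.round(spectral_cleanup(c), 5))
```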

One of the cited applications is “*stress testing and scenario analysis for market risk*” or “*comparative pricing in order to ascertain the extent of correlation exposure for multi-asset derivatives*”, saying that “*In many of these cases we end up with a matrix that is no longer positive semi-definite*”.

It turns out that if one bumps an invalid correlation matrix (the input), which is then cleaned up automatically, the effective bump can be very different. Depending on how familiar you are with the SVD, this may be more or less obvious from the procedure.

As a simple illustration, I take the matrix representing 3 assets A, B, C with rho_ab = -0.6, rho_ac = rho_bc = -0.5:

```
 1.00000 -0.60000 -0.50000
-0.60000  1.00000 -0.50000
-0.50000 -0.50000  1.00000
```

For those values of rho_ac and rho_bc, the correlation matrix is not positive definite unless rho_ab lies in the range (-0.5, 1). One way to verify this is to use the fact that, since the 1x1 and 2x2 leading minors are already positive here, positive definiteness is equivalent to a positive determinant (Sylvester's criterion). The determinant is 1 - 2*0.25 - rho_ab^2 + 2*0.25*rho_ab = 0.5 + 0.5*rho_ab - rho_ab^2, which is positive exactly when rho_ab lies in (-0.5, 1).
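A quick numerical check of that range (the helper name is mine):

```python
import numpy as np

# Determinant of the 3x3 correlation matrix with rho_ac = rho_bc = -0.5 fixed.
def det3(rho_ab):
    c = np.array([[ 1.0,    rho_ab, -0.5],
                  [ rho_ab, 1.0,    -0.5],
                  [-0.5,   -0.5,     1.0]])
    return np.linalg.det(c)

# 0.5 + 0.5*rho_ab - rho_ab^2 vanishes at rho_ab = -0.5 and rho_ab = 1.
print(det3(-0.6))   # negative: the matrix of the example is not positive definite
print(det3(-0.4))   # positive: inside the valid range
```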

After applying P. Jäckel's procedure, we end up with:

```
 1.00000 -0.56299 -0.46745
-0.56299  1.00000 -0.46745
-0.46745 -0.46745  1.00000
```

If we now bump rho_bc of the input by 1% (absolute), we end up after cleanup with:

```
 1.00000 -0.56637 -0.47045
-0.56637  1.00000 -0.46081
-0.47045 -0.46081  1.00000
```

It turns out that rho_bc has changed by only 0.66%, while rho_ac has changed by -0.30% and rho_ab by -0.34%. Our initial bump (0, 0, 1) has been translated into a bump (-0.34, -0.30, 0.66). In other words, this does not work for computing sensitivities.
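The translation of the bump can be reproduced directly. The snippet below is self-contained, so it repeats a sketch of Jäckel's spectral cleanup; the function name is mine:

```python
import numpy as np

# Jäckel's spectral cleanup: clip negative eigenvalues, renormalize the diagonal.
def spectral_cleanup(c):
    lam, s = np.linalg.eigh(c)
    b = s * np.sqrt(np.maximum(lam, 0.0))
    b /= np.sqrt(np.sum(b * b, axis=1))[:, None]
    return b @ b.T

c = np.array([[ 1.0, -0.6, -0.5],
              [-0.6,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])
bumped = c.copy()
bumped[1, 2] = bumped[2, 1] = c[1, 2] + 0.01   # bump rho_bc of the input by 1%

# Effective bump, in absolute %, after cleanup: roughly (-0.34, -0.30, 0.66).
diff = (spectral_cleanup(bumped) - spectral_cleanup(c)) * 100
print(np.round([diff[0, 1], diff[0, 2], diff[1, 2]], 2))
```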

One can instead optimize to obtain the nearest correlation matrix in some norm. Jäckel proposes an optimization based on a hypersphere decomposition, using the SVD solution as initial guess. Higham proposed a specific algorithm just for that purpose. On this example, the two converge to the same solution (if we use the same norm). Out of curiosity, I tried to see whether that would lead to some improvement. The first matrix becomes:

```
 1.00000 -0.56435 -0.46672
-0.56435  1.00000 -0.46672
-0.46672 -0.46672  1.00000
```

And the bumped one becomes

```
 1.00000 -0.56766 -0.46984
-0.56766  1.00000 -0.46002
-0.46984 -0.46002  1.00000
```

We run into the same issue: rho_bc has changed by only 0.67%, rho_ac by -0.31% and rho_ab by -0.33%. We also see that the SVD correlation matrix and the true nearest correlation matrix are quite close, as noted by P. Jäckel.
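For reference, Higham's nearest-correlation-matrix algorithm is alternating projections between the positive semi-definite cone and the unit-diagonal set, with Dykstra's correction, in the Frobenius norm. A minimal sketch (the function name and fixed iteration count are mine; a production version would test for convergence instead):

```python
import numpy as np

def nearest_correlation(a, n_iter=500):
    y = a.copy()
    ds = np.zeros_like(a)
    for _ in range(n_iter):
        r = y - ds                                # apply Dykstra's correction
        lam, s = np.linalg.eigh(r)
        x = (s * np.maximum(lam, 0.0)) @ s.T      # project onto the PSD cone
        ds = x - r
        y = x.copy()
        np.fill_diagonal(y, 1.0)                  # project onto unit diagonal
    return y

c = np.array([[ 1.0, -0.6, -0.5],
              [-0.6,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])
print(np.round(nearest_correlation(c), 5))
```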

Of course, one should apply the bump directly to the cleaned-up matrix, in which case it actually works as expected, unless the bump produces another non-positive-semi-definite matrix, in which case the cleanup would again leak the correlation bump a bit everywhere. It's not entirely clear what meaning the risk figures would have then.
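This caveat is easy to hit here, because the cleaned-up matrix sits on the boundary of the positive semi-definite cone (the cleanup clipped an eigenvalue to exactly zero). A small check, using the cleaned-up values from above: bumping rho_bc by +1% keeps the matrix positive semi-definite, while bumping it by -1% does not.

```python
import numpy as np

# Cleaned-up matrix from the spectral procedure (values from the text).
c = np.array([[ 1.0,     -0.56299, -0.46745],
              [-0.56299,  1.0,     -0.46745],
              [-0.46745, -0.46745,  1.0]])

for eps in (+0.01, -0.01):
    b = c.copy()
    b[1, 2] = b[2, 1] = c[1, 2] + eps        # bump rho_bc on the cleaned matrix
    print(eps, np.linalg.eigvalsh(b).min())  # smallest eigenvalue after the bump
```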