Tuesday, September 17, 2013

Martin Odersky teaches Scala to the Masses

Today I tried the Scala course on Coursera by the Scala creator, Martin Odersky. I was quite impressed by the quality: I somehow believed Scala to be only a better Java; now I think otherwise. Throughout the course, even though it all sounds very basic, you understand the key concepts of Scala and why combining functional programming and OO concepts is a natural idea. What's nice about Scala is that it avoids the functional vs OO, or even the functional vs procedural, debate by allowing both, because both can be important, at different scales. Small details can be (and probably should be) procedural for efficiency, because a processor is a processor, but the higher levels should probably be more functional (immutable) to be clearer, easier to evolve, and more easily parallelized.

I recently saw a very good example at work of how mutability can be very problematic, with no gain in this case because the code was high level (and the mutability was likely just the result of being too used to OO habits).

I believe the course will make my code more functional-programming oriented in the future, especially at the higher levels.

Thursday, September 12, 2013

Setting Values in Java Enum - A Bad Idea

My Scala habits made me create a stupid bug related to Java enums. In Scala, the concept of case classes is very neat, and I recently confused a Java enum with what I sometimes do with Scala case classes.

I wrote an enum with a setter like:

    public static enum BlackVariateType {
        V0,
        ZERO_DERIVATIVE;

        // Mutable field: each enum constant is a singleton, so this value is
        // shared by every piece of code that uses the constant.
        private double volSquare;

        public double getBlackVolatilitySquare() {
            return volSquare;
        }

        public void setBlackVolatilitySquare(double volSquare) {
            this.volSquare = volSquare;
        }
    }

Here, calling setBlackVolatilitySquare overrides any previous value, and thus, if several parts of the code call it with different values, it becomes a mess, as there is only a single instance of each constant.

I am not sure there is even one good use case for a setter on an enum. It sounds like a very dangerous practice in general. The only member variables allowed on an enum should be final ones.
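As a minimal sketch of what I should have written instead (the names below are just for illustration): since the vol square is really a per-computation input rather than a fixed property of the constant, it should not live on the enum at all, but travel next to it in a small immutable holder whose fields are all final.

    public final class BlackVariate {

        public enum Type {
            V0,
            ZERO_DERIVATIVE
        }

        private final Type type;
        // Final field: fixed at construction time, so other callers cannot
        // stomp on it through a shared singleton.
        private final double blackVolatilitySquare;

        public BlackVariate(Type type, double blackVolatilitySquare) {
            this.type = type;
            this.blackVolatilitySquare = blackVolatilitySquare;
        }

        public Type getType() {
            return type;
        }

        public double getBlackVolatilitySquare() {
            return blackVolatilitySquare;
        }
    }

If, on the contrary, the value really were a constant property of each enum value, a final field set in the enum constructor would do the job just as well.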

Thursday, September 05, 2013

Making Classic Heston Integration Faster than the Cos Method

A coworker pointed out to me that the Andersen and Piterbarg book "Interest Rate Modeling" has a chapter on Fourier integration applied to Heston. The authors rely on the Lewis formula to price vanilla call options under Heston.
Lewis formula
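For reference, one common way of writing it (conventions vary across references), with $\varphi$ the characteristic function of the log-forward $\ln(F_T/F_0)$ and $k = \ln(F_0/K)$, is:

$$C = e^{-rT}\left[F_0 - \frac{\sqrt{F_0 K}}{\pi}\int_0^{\infty}\mathrm{Re}\!\left(e^{iuk}\,\varphi\!\left(u-\tfrac{i}{2}\right)\right)\frac{du}{u^2+\tfrac{1}{4}}\right]$$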

More importantly, they strongly advise the use of a Black-Scholes control variate. I had read about that idea before, and actually tried it in the Cos method, but there it did not improve anything, so I was a bit sceptical. I decided to add the control variate to my Attari code, and the results were very encouraging. So I went on to implement the Lewis formula and their basic integration scheme (no change of variable).
Attari formula

Carr-Madan formula (used by Lord-Kahl)

Heston formula

Cos formula
My impression is that the Lewis formula is not so different from the Attari formula in practice: both have a quadratic denominator and are of similar complexity. The Lewis formula makes the Black-Scholes control variate real (the imaginary part of the characteristic function is null). The Cos formula looks quite different, but it is actually not that different either, as the Vk terms have a quadratic denominator as well. I still have this idea of showing how close it is to Attari in spirit.

My initial implementation of Attari relied on the log transform described by Kahl-Jaeckel to move from an infinite integration domain to a finite one. With that transform, adaptive quadratures (for example based on Simpson) provide a better performance/accuracy ratio than the very basic trapezoidal rule used by Andersen and Piterbarg. If I remove the log transform and truncate the integration according to the Andersen-Piterbarg criteria, pricing becomes faster by a factor of 2 to 3.
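To be concrete about the transform (this is just the generic change of variable; Kahl-Jaeckel actually scale it with the asymptotic decay rate of the integrand), substituting $u = -\ln x$ maps the half-line onto the unit interval:

$$\int_0^{\infty} f(u)\,du = \int_0^1 \frac{f(-\ln x)}{x}\,dx$$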

This is one of the slightly surprising aspects of the Andersen-Piterbarg method: a very basic integration scheme like the trapezoidal rule is enough. A more sophisticated scheme, be it a Simpson 3/8 rule or some fancy adaptive Newton-Cotes rule, does not lead to any better accuracy. The Simpson 3/8 rule does not increase accuracy at all (although it does not cost more to compute), while the adaptive quadratures often lead to a higher number of function evaluations or a lower overall accuracy.
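Here is a minimal sketch of that kind of pricer (not the book's code): a plain trapezoidal rule applied to the Lewis integrand with a Black-Scholes control variate. The characteristic function is passed in, evaluated on the line u - i/2, and the truncation bound and number of points are left as plain parameters rather than the Andersen-Piterbarg criteria; all names are mine.

    public final class LewisControlVariatePricer {

        /** Characteristic function of ln(F_T/F_0), evaluated on the line z = u - i/2. */
        public interface CharacteristicFunction {
            /** Returns {Re, Im} of phi(u - i/2). */
            double[] valueAtUMinusHalfI(double u);
        }

        /**
         * Call price by the Lewis formula with a Black-Scholes control variate,
         * integrated with a plain trapezoidal rule on [0, truncation].
         * bsVariance is the total Black-Scholes variance sigma^2 * T of the control.
         */
        public static double priceCall(double forward, double strike, double discountFactor,
                                       double bsVariance, CharacteristicFunction phi,
                                       double truncation, int points) {
            double k = Math.log(forward / strike);
            double h = truncation / points;
            double sum = 0.0;
            for (int i = 0; i <= points; i++) {
                double u = i * h;
                double[] p = phi.valueAtUMinusHalfI(u);
                // The Black-Scholes characteristic function on the same line is real:
                // exp(-0.5 * bsVariance * (u^2 + 1/4)).
                double bs = Math.exp(-0.5 * bsVariance * (u * u + 0.25));
                // Re[ exp(i*u*k) * (phi - phiBS) ] / (u^2 + 1/4)
                double re = Math.cos(u * k) * (p[0] - bs) - Math.sin(u * k) * p[1];
                double weight = (i == 0 || i == points) ? 0.5 : 1.0; // trapezoidal end weights
                sum += weight * re / (u * u + 0.25);
            }
            double correction = Math.sqrt(forward * strike) / Math.PI * sum * h;
            // Add back the closed-form Black-Scholes control price.
            return blackPrice(forward, strike, bsVariance, discountFactor)
                    - discountFactor * correction;
        }

        /** Black formula on the forward with total variance sigma^2 * T. */
        private static double blackPrice(double forward, double strike,
                                         double variance, double discountFactor) {
            double stdDev = Math.sqrt(variance);
            double d1 = Math.log(forward / strike) / stdDev + 0.5 * stdDev;
            double d2 = d1 - stdDev;
            return discountFactor * (forward * cumulativeNormal(d1) - strike * cumulativeNormal(d2));
        }

        /** Abramowitz-Stegun approximation of the standard normal CDF. */
        private static double cumulativeNormal(double x) {
            if (x < 0) {
                return 1.0 - cumulativeNormal(-x);
            }
            double t = 1.0 / (1.0 + 0.2316419 * x);
            double poly = t * (0.319381530 + t * (-0.356563782
                    + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
            double pdf = Math.exp(-0.5 * x * x) / Math.sqrt(2.0 * Math.PI);
            return 1.0 - pdf * poly;
        }
    }

The point is that nothing smarter than equally spaced nodes and trapezoidal weights is needed once the control variate has flattened the integrand.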

Here is the accuracy on put options with a maturity of 2 years:
I had to push the Cos method to 512 points and L=24 (truncation) in order to reach an accuracy similar to Attari and Andersen-Piterbarg with 200 points and a control variate. For 1000 options, here are the computation times (the difference is smaller for 10 options, around 30%):

Attari 0.023s
Andersen-Piterbarg 0.024s
Cos 0.05s

Here is the accuracy on put options with a maturity of 2 days:

All methods used 200 points. The error is nearly the same for all, and the Cos method now takes only 0.02s. The results are similar with a maturity of 2 weeks.

Conclusion
The Cos method performs less well on longer maturities. The Attari or Lewis formula with a control variate and caching of the characteristic function is particularly attractive, especially with the simple Andersen-Piterbarg integration.
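To illustrate the caching idea, reusing the hypothetical CharacteristicFunction interface from the sketch above: for options sharing the same maturity, the characteristic function values at the quadrature nodes do not depend on the strike, so they can be computed once and reused for every strike.

    public final class CachedCharacteristicFunction
            implements LewisControlVariatePricer.CharacteristicFunction {

        private final double step;       // quadrature step h
        private final double[][] values; // cached {Re, Im} of phi(u_j - i/2) at u_j = j * h

        public CachedCharacteristicFunction(LewisControlVariatePricer.CharacteristicFunction phi,
                                            double truncation, int points) {
            step = truncation / points;
            values = new double[points + 1][];
            for (int j = 0; j <= points; j++) {
                values[j] = phi.valueAtUMinusHalfI(j * step);
            }
        }

        @Override
        public double[] valueAtUMinusHalfI(double u) {
            // The pricer evaluates at exactly the cached abscissas (same truncation
            // and point count), so a simple index lookup is enough in this sketch.
            return values[(int) Math.round(u / step)];
        }
    }

Each option of the batch then reuses the same cached wrapper, so the expensive Heston characteristic function is evaluated only once per maturity.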

