Wednesday, December 22, 2010

Firefox 4 Is Great

I temporarily abandoned Firefox for Chrome/Chromium. I am now back to using Firefox, as Firefox 4 is as fast as or faster than Chrome and seems more stable, especially under Linux. It also does not send anything to Google, and it offers bookmark sync independently of Google.

I am impressed that Mozilla managed to improve Firefox that much.

Monday, November 29, 2010

Another Look at Java Matrix Libraries

A while ago, I was looking for a good Java matrix library, complaining that there did not seem to be any really good one under active development: the two best ones are, in my opinion, Jama and Colt.

Recently I tried to price options via RBF (radial basis functions) based on TR-BDF2 time stepping. This is a problem where one needs to do a few matrix multiplications and inverses (or better, LU solves) in a loop. The matrices are typically 50x50 to 100x100, and the loop runs between 10 and 1000 times.
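
The inner loop looks roughly like this (a minimal sketch using Jama, with made-up sizes and a made-up class name; not the actual pricer code):

    import Jama.LUDecomposition;
    import Jama.Matrix;

    public class MatrixLoopSketch {
        public static void main(String[] args) {
            int n = 100;      // matrix size, 50 to 100 in practice
            int steps = 1000; // time steps, 10 to 1000 in practice
            Matrix a = Matrix.random(n, n);
            Matrix b = Matrix.random(n, n);
            Matrix v = Matrix.random(n, 1);

            long start = System.nanoTime();
            // factor once, then multiply and solve at each time step
            LUDecomposition lu = new LUDecomposition(a);
            for (int i = 0; i < steps; i++) {
                v = lu.solve(b.times(v)); // multiply, then LU solve
            }
            System.out.println("elapsed: " + (System.nanoTime() - start) / 1e9 + "s");
        }
    }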

Out of curiosity, I decided to give ojalgo and MTJ a chance. I had read benchmarks (one about jblas, and the Java matrix benchmark) where those libraries performed really well.

On my Core i5 laptop under the latest 64-bit JVM (Windows 7), I found that for the 100x100 case, Jama was actually 30% faster than MTJ, and ojalgo was more than 50% slower. I also found that I did not like the ojalgo API at all. I was quite disappointed by those results.

So I tried the same test on a 6-core Phenom II (Ubuntu, 64-bit): Jama was faster than MTJ by 0-10%, while ojalgo and ParallelColt were slower than Jama by more than 50% and 30% respectively.

This does not mean that ojalgo and ParallelColt are bad; maybe they behave much better than the simple Jama on large matrices. They also have more features, including sparse matrices. But Jama is quite a good choice for a default library. MTJ can also be a good choice: it can be faster and use less memory, because most of its methods take the output matrix/vector as a parameter (see the sketch after the table below). Furthermore, MTJ can use the native LAPACK and BLAS libraries for improved performance; the bigger the matrices, the more difference it makes.

Run    Jama     MTJ      MTJ native
1      0.160    0.240    0.140
2      0.086    0.200    0.220
10     0.083    0.089    0.056
(On a Phenom II under Ubuntu 10.10 64-bit)
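
To illustrate MTJ's output-parameter style (a minimal sketch, not the benchmark code itself):

    import java.util.Random;
    import no.uib.cipr.matrix.DenseMatrix;

    public class MtjOutputStyle {
        public static void main(String[] args) {
            int n = 100;
            Random rng = new Random();
            DenseMatrix a = new DenseMatrix(n, n);
            DenseMatrix b = new DenseMatrix(n, n);
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    a.set(i, j, rng.nextDouble());
                    b.set(i, j, rng.nextDouble());
                }
            }
            // the result is written into a preallocated matrix, so no
            // garbage is created when this is called in a loop
            DenseMatrix c = new DenseMatrix(n, n);
            a.mult(b, c); // c = a * b
        }
    }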

Thursday, August 12, 2010

Java enum Is Evil

Before Java 1.5, I never really complained about the lack of an enum keyword. Sure, the old enum-via-class pattern was a bit verbose at first (N.B.: Java 1.5 enums can also be verbose once you start adding methods to them). But more importantly, you would often use the table lookup pattern in combination, as in the sketch below.
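
Something like this (a minimal sketch of the pre-1.5 typesafe-enum pattern combined with a lookup table; the Side class and its constants are made up):

    import java.util.HashMap;
    import java.util.Map;

    public class Side {
        private static final Map<String, Side> TABLE = new HashMap<String, Side>();

        public static final Side BUY = new Side("BUY");
        public static final Side SELL = new Side("SELL");

        private final String name;

        // protected: unlike a 1.5 enum, nothing prevents adding
        // more constants elsewhere
        protected Side(String name) {
            this.name = name;
            TABLE.put(name, this); // register each instance in the lookup table
        }

        public static Side lookup(String name) {
            return TABLE.get(name);
        }

        public String toString() {
            return name;
        }
    }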

The problem with the Java 1.5 enum is that it is not object-oriented: you can't extend an enum, and you can't add an element to an existing enum. Many will say "but that's what enum is for, a static list of things". In my experience, the list of things often changes with time, or needs to be extended at some point. Furthermore, most people (including me when I am very lazy) end up writing switch statements on enum values (see the sketch below). Enum promotes bad programming practices.
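
Here is the kind of switch I mean, next to the method-on-enum alternative, which quickly gets verbose as noted above (the Operation enum is made up for illustration):

    public class EnumStyles {
        enum Operation {
            PLUS { double apply(double a, double b) { return a + b; } },
            TIMES { double apply(double a, double b) { return a * b; } };

            abstract double apply(double a, double b);
        }

        // the fragile style: every new constant must be added to
        // every switch in the code base
        static double applyWithSwitch(Operation op, double a, double b) {
            switch (op) {
                case PLUS: return a + b;
                case TIMES: return a * b;
                default: throw new AssertionError(op);
            }
        }
    }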

Think twice before using enum; it is often not what you want.

Saturday, August 07, 2010

A Very Interesting Feature of Scala

I tried Scala a few years ago. There are several good ideas in it, but I found the language a bit too complicated to master. However, I recently stumbled upon a paper on Scala generics that might change my mind about using Scala.

Scala generics used to work in a similar way to Java generics: via type erasure. One main reason is compatibility with Java; another is that C++-like templates make the code base blow up. Scala generics offered some additional behavior (the variance/covariance notion). C++ templates, however, have some very interesting aspects: one is that everything is done at compile time, the other is performance. If the generics are involved in any kind of computation-intensive task, all the Java type conversions create a significant overhead.

Now Scala has @specialized (since Scala 2.8). Annotating a type parameter with @specialized makes the compiler generate specialized versions of the code for the given primitive types. One can choose between accepting the performance penalty of the generic version and getting all the performance at the cost of the code blow-up. I think this is very useful.
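
For example (a made-up class, not one from the paper):

    class Pair[@specialized(Int, Double) T](val first: T, val second: T) {
      // the compiler also emits unboxed variants of this class for Int
      // and Double, so pairs of primitives avoid autoboxing entirely
      def swap = new Pair(second, first)
    }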

If you read the paper, you will see that the performance implications of this are not always small.

UPDATE: I thank the readers for pointing out that this works only with primitive types, to avoid autoboxing. It is still valuable, but less than I first thought.

Wednesday, July 28, 2010

Street Fighting Mathematics Book

MIT has a downloadable book on basic mathematics: Street Fighting Mathematics. I liked the part focused on the geometrical approach; it reminded me of early Greek mathematics.

Overall it does look like a very American approach to maths: answering a multiple-choice test by elimination. But it is still an interesting book.

Wednesday, July 21, 2010

Bye Bye Firefox

I have been a Firefox user for a long time, mostly thanks to the Adblock extension. But recently, Firefox changed the way the arrow keys work on web pages: they don't make the page scroll anymore. Meanwhile, Chrome now has a good adblock extension (one that filters ads on load, not after load as it used to) and is really much, much faster than Firefox. So there is no more reason not to use it.

Hello Chrome, bye bye Firefox. Google has won the browser war.

Wednesday, June 09, 2010

Diffusion Limited Aggregation Applet

Yes, I wrote an applet. I know it is very 1990s, but, amazingly, it still does the job quite well. OK, next time I should really use Flash for this.

The applet simulates diffusion limited aggregation as described in Chaos and Fractals by Peitgen, Juergens, and Saupe. It represents ions randomly wandering around (in a Brownian motion) until they are caught by an attractive force, as in an electrochemical deposition experiment. This kind of phenomenon occurs at all scales; for example, it happens in the distribution of galaxies. You can play around with the applet at http://31416.appspot.com/dla.vm. The core algorithm is sketched below.
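
In essence, the simulation does something like this (a minimal sketch of the algorithm, not the applet's actual code):

    import java.util.Random;

    public class DlaSketch {
        public static void main(String[] args) {
            int size = 201;
            boolean[][] stuck = new boolean[size][size];
            stuck[size / 2][size / 2] = true; // seed particle in the center
            Random rng = new Random();

            for (int p = 0; p < 2000; p++) {
                // launch a walker at a random position on the grid
                int x = 1 + rng.nextInt(size - 2);
                int y = 1 + rng.nextInt(size - 2);
                while (true) {
                    // stick as soon as a neighbor belongs to the aggregate
                    if (stuck[x + 1][y] || stuck[x - 1][y]
                            || stuck[x][y + 1] || stuck[x][y - 1]) {
                        stuck[x][y] = true;
                        break;
                    }
                    // otherwise take one Brownian-like step
                    switch (rng.nextInt(4)) {
                        case 0: x++; break;
                        case 1: x--; break;
                        case 2: y++; break;
                        case 3: y--; break;
                    }
                    if (x < 1 || x > size - 2 || y < 1 || y > size - 2) {
                        break; // walker escaped the grid, discard it
                    }
                }
            }

            int count = 0;
            for (boolean[] row : stuck)
                for (boolean cell : row)
                    if (cell) count++;
            System.out.println(count + " particles aggregated");
        }
    }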
