Friday, December 21, 2012

Finite Difference Approximation of Derivatives

A while ago, someone asked me to reference him in a paper of mine because I used formulas for the finite difference approximation of a derivative on a non-uniform grid. I was shocked, as those formulas are very widespread (in countless papers, courses and books) and not far from elementary mathematics.

There are, however, some interesting old papers on the technique. Usually people approximate the first derivative by the second-order central approximation:

$$ f'(x_i) \approx \frac{f(x_{i+1})-f(x_{i-1})}{x_{i+1} - x_{i-1}} $$
However there are other possibilities. For example, one can derive a formula directly from the Taylor expansions of $f(x_{i+1})$ and $f(x_{i-1})$. This paper and that one seem to indicate it is more precise, especially when the grid does not vary smoothly (a typical example is a piecewise-uniform grid).
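
For reference, here is the formula that the Taylor expansion approach yields (this is the standard three-point result, written in my own notation rather than copied from those papers): with $h_i = x_{i+1}-x_i$ and $h_{i-1} = x_i - x_{i-1}$, matching the expansions of $f(x_{i+1})$ and $f(x_{i-1})$ up to second order gives
$$ f'(x_i) \approx \frac{h_{i-1}^2 f(x_{i+1}) + \left(h_i^2 - h_{i-1}^2\right) f(x_i) - h_i^2 f(x_{i-1})}{h_i h_{i-1} \left(h_i + h_{i-1}\right)} $$
which reduces to the central approximation above on a uniform grid, but stays second-order accurate when $h_i \neq h_{i-1}$.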

This can make a big difference in practice. Here is the example of a bond priced under the Cox-Ingersoll-Ross model by finite differences. EULER is the classic central approximation, EULER1 uses the more refined approximation based on the Taylor expansion, and EULER2 uses the Taylor expansion approximation as well as a higher-order boundary condition. I used the same parameters as in the Tavella-Randall book example and a uniform grid on [0, 0.2], except that I added 2 points at the far end, at 0.5 and 1.0. So the only difference between EULER and EULER1 lies in the computation of the derivatives at the last 3 points.
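
For context (this is the standard textbook setup, not a detail taken from the book's example), the zero-coupon bond price $P(r,t)$ under the Cox-Ingersoll-Ross dynamics $dr = \kappa(\theta - r)\,dt + \sigma\sqrt{r}\,dW$ solves
$$ \frac{\partial P}{\partial t} + \kappa(\theta - r)\frac{\partial P}{\partial r} + \frac{1}{2}\sigma^2 r \frac{\partial^2 P}{\partial r^2} - rP = 0, \quad P(r,T)=1, $$
and the first-derivative term $\partial P/\partial r$ is where the choice of approximation above enters.
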
I also computed the second-order backward first derivative on a non-uniform grid (for the refined boundary), with $h_i = x_i - x_{i-1}$. I was surprised not to find this easily on the web, so here it is:
$$ f'(x_i) = \left(\frac{1}{h_i}+\frac{1}{h_i+h_{i-1}}\right) f(x_i)- \left(\frac{1}{h_{i-1}}+\frac{1}{h_i}\right) f(x_{i-1})+ \left(\frac{1}{h_{i-1}} - \frac{1}{h_i+h_{i-1}} \right) f(x_{i-2}) + ...$$
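
As a small illustration, here is a sketch in Java of the three weights of that formula (the class and method names are mine, not from any library):

public final class NonUniformFD {
    /** Weights w such that f'(x_i) ~= w[0]*f(x_i) + w[1]*f(x_{i-1}) + w[2]*f(x_{i-2}),
     *  with hi = x_i - x_{i-1} and him1 = x_{i-1} - x_{i-2}. */
    public static double[] backwardFirstDerivativeWeights(double hi, double him1) {
        double wi = 1.0 / hi + 1.0 / (hi + him1);
        double wim1 = -(1.0 / him1 + 1.0 / hi);
        double wim2 = 1.0 / him1 - 1.0 / (hi + him1);
        return new double[] { wi, wim1, wim2 };
    }

    public static void main(String[] args) {
        // sanity check on f(x) = x^2 at x_i = 1.0 with uneven spacings; exact derivative is 2.0
        double hi = 0.3, him1 = 0.5;
        double xi = 1.0, xim1 = xi - hi, xim2 = xim1 - him1;
        double[] w = backwardFirstDerivativeWeights(hi, him1);
        double approx = w[0] * xi * xi + w[1] * xim1 * xim1 + w[2] * xim2 * xim2;
        System.out.println(approx); // ~2.0 up to rounding: the formula is exact for quadratics
    }
}
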
Incidentally, while writing this post I found out it was a pain to write math in HTML (I initially used a picture). MathML seems a bit crazy; I wonder why they couldn't just use the LaTeX standard. Update January 3rd 2013 - I now use MathJax. It's not a very good solution, as I think this should typically be handled by the browser directly instead of by a huge JavaScript library, but it looks a bit better.

Wednesday, December 12, 2012

A Discontinuity

I am comparing various finite difference schemes on simple problems and am currently stumbling upon a strange discontinuity at the boundary for some of the schemes (Crank-Nicolson, Rannacher, and TR-BDF2) when I plot an American Put option Gamma using a log grid. It is actually more pronounced for some values of the strike, not all; the amplitude oscillates with the strike. And it does not happen on a European Put, so it's not a boundary approximation error in the code. It might well be due to the nature of the scheme, as schemes based on implicit Euler work fine (maybe monotonicity preservation is important). This appears on this graph around S=350.


Update December 13, 2012: after a closer look at what was happening, it was after all a boundary issue. It's more visible on the American option because the Gamma is more spread out, but I reproduced it on a European as well.

Scala is Mad

I spent quite a bit of time figuring out why something that is usually simple to do in Java did not work in Scala: arrays and ArrayLists with generics.

For some technical reason (type erasure at the JVM level), Array sometimes needs a ClassManifest parameter!?! A generic type like [T <: Point : ClassManifest] needs to be declared instead of simply [T <: Point].

And then the quickSort method somehow does not work if invoked on a generic... like quickSort(points) where points: Array[T]. I could not yet figure out how to do this one; I just cast with points.asInstanceOf[Array[Point]], which is quite ugly.

In contrast I did not even have to think much to write the Java equivalent. Generics in Scala, while having a nice syntax, are just crazy. This is something that goes beyond generics. Some of the Scala library and syntax is nice, but overall, the IDE integration is still very buggy, and productivity is not higher.

Update Dec 12 2012: here is the actual code (this is kept close to the Java equivalent on purpose):
import scala.collection.mutable.ArrayBuffer
import scala.util.Sorting

object Point {
  def sortAndRemoveIdenticalPoints[T <: Point : ClassManifest](points: Array[T]): Array[T] = {
    Sorting.quickSort(points.asInstanceOf[Array[Point]])
    if (points.length > 0) {
      val l = new ArrayBuffer[T](points.length)
      var previous = points(0)
      l += points(0)
      for (i <- 1 until points.length) {
        // keep a point only if it differs from the previous kept one by more than the tolerance
        // (Epsilon.MACHINE_EPSILON_SQRT is the author's own constant)
        if (math.abs(points(i).value - previous.value) > Epsilon.MACHINE_EPSILON_SQRT) {
          l += points(i)
          previous = points(i)
        }
      }
      return l.toArray
    }
    return points
  }
}

class Point(val value: Double, val isMiddle: Boolean) extends Ordered[Point] {
  def compare(that: Point): Int = {
    return math.signum(this.value - that.value).toInt
  }
}
In Java one can just use Arrays.sort(points) if points is a T[]. And the method can work with a subclass of Point.
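
For comparison, here is a sketch of what the Java equivalent could look like (my own reconstruction, not the actual Java code mentioned above; EPSILON is a hypothetical tolerance constant, and the method returns a List to sidestep generic array creation):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public final class Points {
    static final double EPSILON = Math.sqrt(Math.ulp(1.0)); // hypothetical tolerance

    // T[] can be sorted directly with Arrays.sort because Point implements Comparable
    public static <T extends Point> List<T> sortAndRemoveIdenticalPoints(T[] points) {
        Arrays.sort(points);
        List<T> l = new ArrayList<T>(points.length);
        if (points.length == 0) {
            return l;
        }
        T previous = points[0];
        l.add(previous);
        for (int i = 1; i < points.length; i++) {
            if (Math.abs(points[i].value - previous.value) > EPSILON) {
                l.add(points[i]);
                previous = points[i];
            }
        }
        return l;
    }
}

class Point implements Comparable<Point> {
    final double value;
    Point(double value) { this.value = value; }
    public int compareTo(Point that) { return Double.compare(this.value, that.value); }
}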

Thursday, November 29, 2012

Local Volatility Delta & Dynamic

This will be a very technical post, I am not sure that it will be very understandable by people not familiar with the implied volatility surface.

Something one notices when computing an option price under local volatility using a PDE solver is how different the Delta is from the standard Black-Scholes Delta, even though the price will be very close for a vanilla option. Indeed, the finite difference grid will have a different local volatility at each point, and the Delta will take into account a change in local volatility as well.

But this finite-difference grid Delta is also different from a standard numerical Delta, where one just moves the initial spot up and down and takes the difference of the computed prices. The numerical Delta will eventually include a change in implied volatility, depending on whether the surface is sticky-strike (the vol will stay constant) or sticky-delta (the vol will change). So the numerical Delta produced with a sticky-strike surface will be the same as the standard Black-Scholes Delta. In reality, what happens is that the local volatility is different when the spot moves up, if we recompute it: it is not static. The finite difference solver computes the Delta with a static local volatility. If we call the finite difference solver twice with different initial spots, we will reproduce the correct Delta, which takes into account the dynamics of the implied volatility surface.
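
In symbols (my own shorthand, not taken from any particular reference): writing $\sigma_{loc}^{S}$ for the local volatility surface recomputed from the implied surface seen when the spot is $S$, the two Deltas are
$$ \Delta_{grid} \approx \frac{V(S_0+h;\sigma_{loc}^{S_0}) - V(S_0-h;\sigma_{loc}^{S_0})}{2h}, \qquad \Delta_{dyn} \approx \frac{V(S_0+h;\sigma_{loc}^{S_0+h}) - V(S_0-h;\sigma_{loc}^{S_0-h})}{2h}, $$
the first keeping the local volatility frozen, the second recomputing it for each bumped spot.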

Here is how different it can be when the delta is computed from the grid (static local volatility) or numerically (dynamic local volatility) on an exotic trade:


This is often why people assume the local volatility model is wrong or not consistent. It is wrong if we treat the local volatility surface as static when computing hedges.

Wednesday, November 14, 2012

OpenSuse 12.2

After too many upgrades of Ubuntu, and switching from Gnome to KDE and back, my Ubuntu system started behaving strangely in KDE: authorization issues, frequent crashes, pulseaudio & ardour problems. I decided to give another try to OpenSuse, as Linux makes it easy to switch systems without losing too much time reinstalling the useful applications.

It's been only a few days, but I am pleasantly surprised with OpenSuse. It feels more polished than Kubuntu. I could not point to a specific feature, but so far I have not had to fiddle with any configuration file; everything works well out of the box. Somehow Kubuntu always felt flaky, ready to break at any moment, while OpenSuse feels solid. But they should consider changing the default font settings in KDE to take proper advantage of antialiasing and pretty fonts (it's only a few clicks away, but still, the default is not the prettiest).

Monday, October 15, 2012

GPU computing in Finance

Very interesting presentation from Murex about their GPU computing. Some points were:
- GPU demand is mostly for exotics pricing & greeks.
- Local vol is the main model for EQD exotics; local vol calibrated via a PDE approach.
- The Markov functional model is becoming the main model for IRD.
- Use of local regression instead of Longstaff-Schwartz (or, worse, CVA-like simulation within simulation).
- Philox RNG from D. E. Shaw. But the presenter does not seem to know RNGs very well (he recommended a Brownian bridge for Mersenne Twister!).
- An important advantage of GPU is latency. Grid computing only improves throughput but not latency. GPU improves both.

http://nvidia.fullviewmedia.com/gtc2010/0923-a7-2032.html

Wednesday, September 12, 2012

Pretty Fonts in Chrome with Linux

It's a bit incredible, but in 2012 some Linux distros (like Fedora or Kubuntu) still have trouble getting pretty fonts everywhere. I found a nice tip, initially for Google Chrome, that seems to improve more than just Chrome: create ~/.fonts.conf with the following:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
    <edit name="hinting" mode="assign">
      <bool>true</bool>
    </edit>
    <edit mode="assign" name="hintstyle">
      <const>hintslight</const>
    </edit>
  </match>
</fontconfig>

Update from 2013 - This can be done system wide, see http://chasethedevil.blogspot.com/2013/08/better-fonts-in-fedora-than-in-ubuntu.html

Fedora 17 vs Ubuntu 12.04

I had the bad idea to upgrade to the beta of Ubuntu 12.10. Something broke badly in the upgrade. After struggling too much with apt-get & dpkg, I decided to install Fedora 17.

Strangely, Fedora feels much faster than Ubuntu 12.04 (the boot time especially). Yum also seems faster than apt-get, especially for updates. Also, while the Unity dock is not bad (better than the Gnome Shell dock), the Unity dash just drives me crazy; the Gnome Shell activities, while similar, are much easier to use.

But it needs a few more steps to install, although I had no problem installing nvidia drivers, Oracle 11g XE, Java, MP3 support, nice fonts and nice icons thanks to the guides here and there. SSD trim instructions are the same (basically use "discard" instead of "default" in /etc/fstab). I had the most trouble with Oracle: somehow the start script does not work, and I currently log in as "oracle" and start /u01/app/oracle/product/11.2.0/xe/config/scripts/startdb.sh from there (after having added the proper .bashrc for this user).

I even managed the peculiarities of my laptop in a similar way as on Ubuntu: I want the Nvidia card when plugged into a monitor (to be able to use it) and the Intel card when not (to be able to use the LCD screen). My solution is to use the boot screen (in reality one just needs to restart X11): this amounted to adding an "hdmi" entry in "/etc/grub.d/40_custom" and linking the following script (a dirty hack) as "/etc/rc3.d/S10DriverSelect":


#!/bin/sh
if grep -q hdmi /proc/cmdline
then
  if [ -d /usr/lib64/xorg/modules.bak/extensions/nvidia ];
  then 
    cp -f /etc/X11/xorg.conf.hdmi /etc/X11/xorg.conf
    mv /usr/lib64/xorg/modules.bak/extensions/nvidia /usr/lib64/xorg/modules/extensions/
  fi
else
  if [ -d /usr/lib64/xorg/modules/extensions/nvidia ];
  then
    cp -f /etc/X11/xorg.conf.intel /etc/X11/xorg.conf
    mv /usr/lib64/xorg/modules/extensions/nvidia /usr/lib64/xorg/modules.bak/extensions/
  fi
fi

Linux distros are really becoming closer in terms of configuration; LSB has made great progress.

Edit from November 29: After a few weeks, I noticed that the system was unfortunately quite unstable. As a result, I moved back to Ubuntu on my laptop. I am running OpenSuse on my home computer. One year later, I am back to Fedora (19, then 20) on my desktop - no stability issues, and I prefer Gnome over KDE.

Friday, September 07, 2012

Binary Voting

How many reports have you had to fill in with a number of stars to choose? How much useless time is spent figuring out this number just because it is always very ambiguous?

Some blogger wrote an interesting entry on Why I Hate Five Stars Reviews. Basically he advocates binary voting instead via like/dislike. Maybe a ternary system via like/dislike/don't care would be ok too.

One coworker used to advocate the same for a similar reason: people reading those reports only pay attention to the extremes: the 5 stars or the 0 stars. So if you want to have a voice, you need to express it via 5 or 0, nothing in between.


Tuesday, August 21, 2012

Moving The Needle

These days the expression "move the needle" is popular where I work. I did not know it was an expression Steve Jobs used.

“The company starts valuing the great salesmen, because they’re the ones who can move the needle on revenues, not the product engineers and designers. So the salespeople end up running the company.… [Then] the product guys don’t matter so much, and a lot of them just turn off. It happened at Apple when [John] Sculley came in, which was my fault, and it happened when Ballmer took over at Microsoft. Apple was lucky and it rebounded, but I don’t think anything will change at Microsoft as long as Ballmer is running it.”


This is from his biography; I just saw it quoted in an interesting article about Microsoft's problems:

http://www.vanityfair.com/business/2012/08/microsoft-lost-mojo-steve-ballmer?mbid=social_retweet

I would not take those words literally: I have seen a company with the inverse problem, developing technical stuff for the sake of it, without a connection to what the market (or the users) are really after.

In the case of Apple, the engineers and designers actually know quite well what the market is after, maybe more so than the salespeople. But it is unfortunately not the case in every company. Still, the case of people turning off because it is too hard to convince the hierarchy is probably quite common.

When solar panels don't work

I thought I would add another word about keyboard trends. A coworker bought the Logitech K750, the one with solar panels to recharge the battery. This keyboard has excellent reviews on many websites, even on Amazon. I somehow always found the idea a bit strange; it reminded me of the old solar-panel calculators that used to be trendy when I was in primary school.


Well, after maybe 6 months of use, he needs to change the battery! It sounds like the solar panels were just a marketing ploy after all.

Tuesday, July 31, 2012

Excel Bulk Entry of Jira using Apache HttpClient & POI

Where I work, I have to regularly enter my time in JIRA using their crappy portlet interface. Because of French regulations and bad design, one can enter time for at most 1 day at a time. This is very annoying, especially for entering vacation days. I decided to spend some time (it took me around 2 hours - I thought it would be much more) to enter the time in a local Excel spreadsheet (edited with OpenOffice) and use Java to populate JIRA.

First I had to find out what the relevant requests were. Firefox has several extensions for that, but I found Tamper Data to be the easiest to work with (hint: use copy/paste in the Tamper Data window to get the full request in a nice format).

Apache HttpClient provides an easy way to do HTTP requests and handles cookies almost automatically in Java. Here is the login phase:

List<NameValuePair> formparams = new ArrayList<NameValuePair>();
formparams.add(new BasicNameValuePair("os_username", "mouse@thecat"));
formparams.add(new BasicNameValuePair("os_password", "DEADDEAD"));
UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, "UTF-8");
HttpPost httppost = new HttpPost("https://jira.calypso.com/rest/gadget/1.0/login");
httppost.setEntity(entity);
DefaultHttpClient httpclient = new DefaultHttpClient();
CookieStore cookieStore = new BasicCookieStore();
httpclient.setCookieStore(cookieStore);
ResponseHandler<byte[]> handler = new ResponseHandler<byte[]>() {
    public byte[] handleResponse(HttpResponse response)
            throws ClientProtocolException, IOException {
        System.out.println("<-" + response.getStatusLine());
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            return EntityUtils.toByteArray(entity);
        } else {
            return null;
        }
    }
};
System.out.println("->" + httppost.getURI());
byte[] response = httpclient.execute(httppost, handler);


Then a request to our JIRA portlet looks like:


formparams = new ArrayList<NameValuePair>();
formparams.add(new BasicNameValuePair("inline", "true"));
formparams.add(new BasicNameValuePair("decorator", "dialog"));
formparams.add(new BasicNameValuePair("startDate", startDate));
formparams.add(new BasicNameValuePair("timeLogged", timeLogged));
formparams.add(new BasicNameValuePair("id", id));
formparams.add(new BasicNameValuePair("adjustEstimate", "auto"));
entity = new UrlEncodedFormEntity(formparams, "UTF-8");
httppost = new HttpPost("https://jira.calypso.com/secure/CreateWorklog.jspa");
httppost.addHeader("Referer", "https://jira.calypso.com/browse/"+ jiraCAL);
httppost.setEntity(entity);
System.out.println("->" + httppost.getURI());
response = httpclient.execute(httppost, handler);


Parsing Excel with Apache POI is a bit annoying, but I kept fixed conventions to make things simple:


InputStream inp = new FileInputStream(file);
HSSFWorkbook wb = new HSSFWorkbook(new POIFSFileSystem(inp));
List<TimeLine> list = new ArrayList<TimeLine>();
HSSFSheet sheet = wb.getSheetAt(0);
boolean isEmpty = false;
int i = 0;
while (!isEmpty) {
    HSSFRow row = sheet.getRow(i);
    if (row == null) { isEmpty = true; break; }

    HSSFCell dateCell = row.getCell(0);
    HSSFCell calCell = row.getCell(1);
    HSSFCell idCell = row.getCell(2);
    HSSFCell percentCell = row.getCell(3);
    if (dateCell == null) {
        isEmpty = true;
    } else if (dateCell.getCellType() == HSSFCell.CELL_TYPE_NUMERIC && calCell != null) {
        TimeLine timeLine = new TimeLine();
        timeLine.date = HSSFDateUtil.getJavaDate(dateCell.getNumericCellValue());
        if (timeLine.date.after(startDate)
                && timeLine.date.before(endDate)) {
            timeLine.jiraCAL = calCell.getStringCellValue();
            if (timeLine.jiraCAL != null && timeLine.jiraCAL.length() > 0) {
                timeLine.id = Integer.toString((int) idCell.getNumericCellValue());
                timeLine.percent = Integer.toString((int) percentCell.getNumericCellValue());
                list.add(timeLine);
            }
        }
    }
    i++;
}


Obviously, this is not clean code; the goal was only to do something quick and dirty to solve my immediate problem.

Monday, July 30, 2012

Keyboard Porn

I have been browsing the web, looking for a nice computer keyboard. Programming is a big part of my job, a comfortable keyboard is therefore important to my daily life.
I have some nostalgia for the original Microsoft natural keyboard, the white one with standard home and end keys. When it came out it looked revolutionary. I remember really improving my typing speed on it. The only minor annoyance was the heavy and loud space bar. It's sad that MS does not make those in azerty anymore (I know qwerty is better for programming - I used it while I was working in the US, but it's just too annoying to have a different layout from 95% of the people around). The new MS ergonomic model just looks ugly and scary with all those extra keys.
At home I have a Logitech K800, very practical as I often use it from the couch and at nighttime. When I tried it, I was impressed with the key feel: the action is a bit longer than on a laptop keyboard (especially the Apple ones), and smooth. But one day, my son, by hammering on it with all the force of a 2-year-old, damaged it. Some keys would repeatedly print other letters. Still, after removing some keys, washing the keyboard several times and waiting a couple of weeks, it started working again. But now the keys feel very mushy and not very nice to type on. This is obviously due to the heavy abuse, and the fact that the back is flexible plastic probably did not help. And I wonder if a new K800 (or a Logitech Illuminated) would not just age the same way.


I looked and looked, and read the forums. The hip keyboards seem to be the mechanical ones. I was tempted to get a WASD keyboard even though I never liked the feel and sound of my uncle's IBM Model M keyboard. WASD offers the combination of Cherry MX Red switches and O-rings, which should be significantly better for me.
Somehow, despite all this research for a better keyboard, I kind of like my current work keyboard, which is just a cheap Microsoft Wired Keyboard 600. It does feel comfortable and easy to type on. If it ages, it is just $17 to replace. I am less convinced that another keyboard would improve things significantly: I disliked the very clicky feel of the IBM Model M and of the old, not so comfortable (because relatively high), straight keyboards. I also found the Apple keyboards sadly a bit tiring with their very short action; I enjoyed their compactness, but I suspect this is what made me place my hands in bad positions.

I manage to do around 70 wpm (words per minute) on the Microsoft Wired Keyboard 600, as well as on a very basic 4€ keyboard (an old-style, slightly noisy Chinese rubber-dome Atlantis Land K2101), but on the Logitech K800 I only do 55 wpm. So much for the 80€ keyboard compared to the 4€ one. Those numbers are not very precise; I only tried a silly simple wpm website: http://www.typeonline.co.uk/typingspeed.php

I am still a bit curious about those MS Comfort Curve keyboards; they don't have great reviews, but they might be a small improvement over the basic Wired Keyboard 600.

Monday, June 25, 2012

Adaptive Quadrature for Pricing European Option with Heston

The QuantLib code to evaluate the Heston integral for European options is quite nice. It offers the Kahl & Jaeckel method as well as Gatheral's method for the complex logarithm. It also contains expansions where it matters, so that the resulting code is very robust. One minor issue is that it does not integrate both parts at the same time, and it also does not offer Attari's method for the Heston integral, which is supposed to be more stable.

I was surprised to find out that out-of-the-money, short-expiry options seemed badly mispriced. In the end I discovered it was just that it sometimes required more than 3500 function evaluations to reach an accuracy of 1e-6.

As this sounds a bit crazy, I thought that the Jaeckel log transform was the culprit. In reality, it turned out to be the Gauss-Lobatto implementation of Gander & Gautschi. I tried the simplest of Espelid's improved algorithms: modsim, an adaptive extrapolated Simpson method, and it was 4x faster for the same accuracy. That, plus the fact that it worked out of the box (translated to Java) on my problem, was impressive.
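
To give an idea of the family of methods involved, here is a minimal sketch of a plain recursive adaptive Simpson rule in Java (the textbook algorithm, not Espelid's modsim and not the code used for the timings above):

public final class AdaptiveSimpson {
    public interface Func {
        double value(double x);
    }

    /** Integrates f on [a, b] to roughly the requested absolute tolerance. */
    public static double integrate(Func f, double a, double b, double tol) {
        double m = 0.5 * (a + b);
        double fa = f.value(a), fm = f.value(m), fb = f.value(b);
        double whole = (b - a) / 6.0 * (fa + 4.0 * fm + fb); // Simpson estimate on [a, b]
        return recurse(f, a, b, fa, fm, fb, whole, tol);
    }

    private static double recurse(Func f, double a, double b,
                                  double fa, double fm, double fb, double whole, double tol) {
        double m = 0.5 * (a + b);
        double lm = 0.5 * (a + m), rm = 0.5 * (m + b);
        double flm = f.value(lm), frm = f.value(rm);
        double left = (m - a) / 6.0 * (fa + 4.0 * flm + fm);
        double right = (b - m) / 6.0 * (fm + 4.0 * frm + fb);
        double delta = left + right - whole;
        if (Math.abs(delta) <= 15.0 * tol) {
            return left + right + delta / 15.0; // Richardson extrapolation of the two halves
        }
        return recurse(f, a, m, fa, flm, fm, left, 0.5 * tol)
                + recurse(f, m, b, fm, frm, fb, right, 0.5 * tol);
    }

    public static void main(String[] args) {
        // sanity check: the integral of exp(-x) on [0, 10] is 1 - exp(-10) ~ 0.9999546
        Func f = new Func() {
            public double value(double x) { return Math.exp(-x); }
        };
        System.out.println(integrate(f, 0.0, 10.0, 1e-9));
    }
}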

The Jaeckel log transform (to change the integration interval from (0, +inf) to (0, 1)) works well, and seems to offer a slight speedup (10% to 15%) for around-ATM, mid- to long-term options for the same accuracy. Unfortunately, it can also slow down convergence by up to 50% for more OTM options or shorter expiries. So I am not so sure about its interest versus just cutting off the integration at phi=200.
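
For reference, the generic change of variable behind this kind of transform (my own statement of it, not necessarily the exact form used by Jaeckel) is $\phi = -\ln u$, which maps the semi-infinite integral onto the unit interval:
$$ \int_0^{\infty} g(\phi)\, d\phi = \int_0^1 g(-\ln u)\, \frac{du}{u}. $$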

Thursday, June 14, 2012

Gnome Shell more stable than Unity on Ubuntu 12.04

Regularly, the Unity dock made some applications inaccessible: clicking on the app icon did not show or start the app anymore, a very annoying bug. This is quite incredible given that this version of Ubuntu is supposed to be a long-term support release. So I decided to give one more chance to Gnome Shell. Installing it on Ubuntu 12.04 is simple with this guide.

To my surprise it is very stable so far. Earlier Gnome Shell versions were not as stable. After installing various extensions (the dock especially) it is as usable as Unity for my needs. It seems more responsive as well. I am not really into the new Unity features like the HUD. It sounds to me like Ubuntu is making a mistake with Unity compared to Gnome Shell.

To make an old extension support the latest Gnome Shell version, it is sometimes necessary to update the extension metadata with the version given by gnome-shell --version. For the weather extension you can just edit it using gedit:

sudo gedit /usr/share/gnome-shell/extensions/weather@gnome-shell-extensions.gnome.org/metadata.json

Friday, April 27, 2012

John Carmack on Parallelism

This is the interesting bit "Modify some of your utility object code to return new copies instead of self-mutating, and try throwing const in front of practically every non-iterator variable you use".
 
http://www.altdevblogaday.com/2012/04/26/functional-programming-in-c/

Tuesday, March 27, 2012

Google Galaxy Nexus Sound Quality Is Great

Many people are not enthusiastic about this phone's sound if you read silly forums. They are wrong! The sound coming out of this thin phone is amazing, at least with high-quality headphones. I find the AKG Q601 incredible with it: much, much better than with the old iPod nano or the Cowon i7.

In general, most complaints I have read about the phone were wrong. The battery is OK, and the size is great.

Wednesday, February 29, 2012

Why primitive arrays matter in Java

In the past, I have seen that one could greatly improve the performance of some Monte-Carlo simulations by using double[][] as much as possible instead of arrays of objects.

It was interesting to read this blog post explaining why that happens: it is all about memory access.
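
As a small illustration of the idea (my own sketch, not taken from the linked post), the same accumulation can be written over a primitive 2D array or over an array of small objects; the primitive version keeps the doubles contiguous in memory, which is much friendlier to the CPU cache:

public final class PathLayout {
    static final class PathPoint {
        double value; // one heap object per value in the object layout
    }

    // primitive layout: paths[i][j] holds the value of path i at time step j
    static double sumPrimitive(double[][] paths) {
        double sum = 0.0;
        for (int i = 0; i < paths.length; i++) {
            for (int j = 0; j < paths[i].length; j++) {
                sum += paths[i][j];
            }
        }
        return sum;
    }

    // object layout: each value sits in its own object, scattered across the heap
    static double sumObjects(PathPoint[][] paths) {
        double sum = 0.0;
        for (int i = 0; i < paths.length; i++) {
            for (int j = 0; j < paths[i].length; j++) {
                sum += paths[i][j].value;
            }
        }
        return sum;
    }
}

Both methods compute the same number; the difference shows up only in memory traffic, which is exactly the point of the linked post.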

Monday, February 06, 2012

Scala Again

I am trying Scala again. Last time, several years ago, I played around with it as a web tool, combining it with a Servlet Runner like Tomcat. This time, I play around with it for some quantitative finance experiments.

Why Scala? It still seems the most advanced alternative to Java on the JVM, and the mix of functional and OO programming is interesting. Furthermore it goes quite far, as it ships with its own library. I was curious to see if I could express some things better with Scala.

Here are my first impressions after a week:
  • I like the object keyword. It avoids the messy singleton pattern, or the classes with many static methods. I think it makes things much cleaner to not use static at all but distinguish between object & class.
  • I like Array[Double], and especially ArrayBuffer[Double]. Finally we don't have to worry about the Double vs double performance issues.
  • I was a bit annoyed by a(i) instead of a[i] but it makes sense. I wonder if there is a performance implication for arrays, hopefully not.
  • I like the real properties, automatic getter/setter: less boilerplate code, less getThis(), setThat(toto).
  • Very natural interaction with Java libraries.
  • I found a good use of case classes (to my surprise): typically an enum that can have some well defined parameters, and that you don't want to make a class (because it's not). My use case was to define boundaries of a spline.
  • I love the formatter in the scala (eclipse) IDE. Finally a formatter in eclipse that does not produce crap.
Now things I still need time to get used to:
  • Member variables declared implicitly in the constructor. At first I made the mistake (still do?) of declaring some variables twice.
  • I got hit by starting a line with a + instead of ending with a +. It is dangerous, but it certainly makes the code more consistent.
  • Performance impacts: I will need to take a look at the bytecode for some Scala constructs to really understand the performance impact of some uses. For example, I tend to use while loops instead of for comprehensions after some scary post from the Twitter guys about for comprehensions. But at first glance, it looks as fast as Java.
  • I wrote my code a bit fast. I am sure I could make use of more Scala features.
  • The scala IDE in eclipse 3.7.1 has known issues. I wish it was a bit more functional, but it's quite ok (search for references works, renaming works to some extent).
  • Scala unit tests: I used ScalaTest, but it seems a bit funny at first. Also I am not convinced by the syntax that avoids method names and prefers test("test name"). It makes it more difficult to browse the source code.
Some things they should consider:
  • Integrate directly a Log API. I just use SLF4J without any scala wrapper, but it feels like it should be part of the standard API (even if that did not work out so well for Sun).
  • Double.Epsilon is not the machine epsilon: very strange. I found out somewhere else there was the machine epsilon, don't remember where because I ended up just making a small object.
  • Unit tests should be part of the standard API.
Overall I found it quite exciting, as there are definitely new ways to solve problems. It had been a while since I had been excited by actual coding.

Friday, January 27, 2012

KDE 4.8 finally has a dock

KDE 4.8 finally has a dock: you just have to add the Plasma Icon Tasks widget. Also, the flexibility around ALT+TAB is welcome. With Krusader as file manager, and Thunderbird and Firefox for email and web, it is becoming a really nice desktop, but it took a while since the very bad KDE 4.0 release.

It is easy to install under Ubuntu 11.10 through the backports and seems very stable so far.

Something quite important is to tweak the fonts: use DejaVu Sans instead of the Ubuntu fonts, use RGB subpixel rendering, and use Crisp desktop effects. With those settings, KDE looks very nice. It's sad that they are not the defaults in Kubuntu.

Update March 2013: It has been in the standard Ubuntu repositories for a while now, and I believe it is installed by default; one just has to remove the Task Manager widget and add the Icon Tasks widget:
One can also change the settings using a right click (I find it useful not to highlight the windows), and it can look like:


Wednesday, January 11, 2012

List of companies where I have been an employee

Intern:
Siemens (Berlin)
IBM (Boeblingen)
Osram Sylvania (Beverly, MA)

Employee:
Cap Gemini (Paris) working for Alcatel
Silicomp (Paris) working for Alcatel Nextenso
C2labs / one 0 development (San Francisco, CA) working for Whenmobile, Sony Pictures, GoPix, Technorati.
Credit Agricole alternative (Paris)
Prima solutions (Paris)
Esearchvision (Paris)
Ulink (Paris)
Edifixio (Paris)
Horizon (Paris)
Darty (Paris)
Calypso (Paris)

Monday, January 09, 2012

Generating random numbers following a given discrete probability distribution

I have never really thought very much about generating random numbers according to a precise discrete distribution, for example to simulate an unfair die. In finance, we are generally interested in continuous distributions, where there are typically 2 ways: the inverse transform (usually computed numerically), and the acceptance-rejection method (typically the ziggurat). The inverse transform is often preferred, because it is a usable method for Quasi Monte-Carlo simulations while acceptance-rejection is not.

I would have thought of the simple way to generate random numbers according to a discrete distribution as first described here. But establishing a link with Huffman encoding is brilliant. A better-performing alternative (unrelated to Huffman) is offered there.
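
To make the "simple way" concrete, here is a minimal sketch of inverse-transform sampling for a discrete distribution, i.e. an unfair die (my own illustration, not the method described in the linked posts): build the cumulative probabilities once, then map each uniform draw through a binary search, which costs O(log n) per draw and consumes exactly one uniform, so it also works with quasi-random sequences.

import java.util.Arrays;
import java.util.Random;

public final class DiscreteSampler {
    private final double[] cumulative; // cumulative[i] = p_0 + ... + p_i

    public DiscreteSampler(double[] probabilities) {
        cumulative = new double[probabilities.length];
        double sum = 0.0;
        for (int i = 0; i < probabilities.length; i++) {
            sum += probabilities[i];
            cumulative[i] = sum;
        }
        cumulative[cumulative.length - 1] = 1.0; // guard against rounding
    }

    /** Maps a uniform u in [0,1) to an outcome index via the inverse CDF. */
    public int sample(double u) {
        int i = Arrays.binarySearch(cumulative, u);
        return i >= 0 ? i + 1 : -i - 1;
    }

    public static void main(String[] args) {
        // an unfair die where face 0 comes up half the time
        DiscreteSampler die = new DiscreteSampler(new double[] { 0.5, 0.1, 0.1, 0.1, 0.1, 0.1 });
        Random rng = new Random(42);
        int[] counts = new int[6];
        for (int k = 0; k < 100000; k++) {
            counts[die.sample(rng.nextDouble())]++;
        }
        System.out.println(Arrays.toString(counts)); // roughly [50000, 10000, 10000, 10000, 10000, 10000]
    }
}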
