Sunday, December 23, 2007

Fedora 8 - The Comeback

My previous post about Ubuntu Gutsy vs Fedora 8 was weak. It is difficult to find a very good distro. Depending on the computer, I have had different experiences. On some computers, Ubuntu really shines and works with minimal tweaking. On others, Ubuntu is unstable or does not handle wireless correctly, and Fedora is much more stable.

The main issues I can see with Fedora 8 are:
- LVM by default. I don't think it is a good idea to go with LVM by default, since lots of basic tools still do not handle it properly. And if you want to read your disk with anything other than a distro that supports LVM, you are screwed. Plain ext3 is IMHO a much wiser choice. Besides, resizing partitions is rarely a problem, as it is not something one does often.
- Fewer programs available in the repositories. Under Ubuntu, I was using gtkguitune to tune my guitar; it was in the default repositories. It does not exist for Fedora, and I did not manage to compile it due to too-old dependencies (GTK 1.2). I found accordeur on SourceForge, which is a better program and is available as an RPM, so in the end I found something. But while searching, I saw that the choice was not as wide as with Ubuntu.
- Packages split too small: it is not exactly clear which packages you need to start compiling programs on Fedora. If you look at the default categorization, far too many things would be silly to enable by default. Package management in general also seems less stable and less easy to use than with Ubuntu.

What I like:
- newer Linux kernel, with a better scheduler. Fedora seems more responsive.
- Bluetooth relatively well handled by default. The standard method of editing /etc/bluetooth/default works well to plug in a keyboard.
- wifi well handled by default. I had lots of problems with Ubuntu on my computer with its wifi card; I don't have them with Fedora.

Next step:
- maybe try SUSE 10.3, as I just found out that Amarok is the best music player on earth today. As music playing is a very important use of my computer, a KDE-based distro makes sense.

But that would be a silly typical Linux user reaction.

Wednesday, December 12, 2007

Haskell Fibonacci Revisited

Recently, there was an interesting post about Haskell performance and Haskell parallelization showing Haskell could outperform C on a simple Fibonacci example.

A friend of mine, Peter (whom I seem to manage to constantly piss off), thought about it on another level, saying you could do a _MILLION_ times better using a direct formula in C or Java, the Binet formula.

I decided to try it, as the scale of the improvement seemed a bit surprising. I first compared a recursive Fibonacci in Java with one in Haskell. Here are the results for Haskell GHC 6.6.1 vs Java 1.6.0 on Linux for fib(44):


Then I measured the Binet formula. For fib(44), or any fib at all, I was unable to measure it precisely enough, since the time always came out as 0 ms, in Haskell as well as in Java. Looping 10 million times, Java took 7.3 s and Haskell something similar (but my method of looping 10 million times in Haskell is probably very bad).
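
For reference, here is a minimal Java sketch (my own illustration, not the original benchmark code nor Peter's) of the two approaches: the naive recursion and the Binet closed form.

public class FibCompare {

  // Exponential-time recursion: fib(44) already takes seconds.
  static long fibRecursive(int n) {
    if (n < 2) {
      return n;
    }
    return fibRecursive(n - 1) + fibRecursive(n - 2);
  }

  // Binet formula: a constant number of floating-point operations,
  // but doubles run out of precision fairly quickly (around n = 71, see below).
  static long fibBinet(int n) {
    double sqrt5 = Math.sqrt(5.0);
    double phi = (1.0 + sqrt5) / 2.0;
    return Math.round(Math.pow(phi, n) / sqrt5);
  }

  public static void main(String[] args) {
    System.out.println(fibRecursive(44)); // 701408733, slowly
    System.out.println(fibBinet(44));     // 701408733, instantly
  }
}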


The original post actually points to a page that describes various algorithms for Fibonacci. It basically says that for large n the rounding is not precise enough, and it also proposes algorithms in log(n). I tried them and was really impressed by their performance. Again, I could not measure the difference between them and the Binet formula for a single calculation, as the elapsed time is always 0. The Binet formula already becomes inexact at n=71 in Java with doubles.
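
One such log(n) scheme is the fast-doubling method; here is a sketch in Java using BigInteger so large n stays exact (my own illustration, not the code from the linked page).

import java.math.BigInteger;

public class FibFast {

  // Returns {fib(n), fib(n+1)} using the identities
  //   fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
  //   fib(2k+1) = fib(k)^2 + fib(k+1)^2
  static BigInteger[] fibPair(long n) {
    if (n == 0) {
      return new BigInteger[] { BigInteger.ZERO, BigInteger.ONE };
    }
    BigInteger[] half = fibPair(n / 2);
    BigInteger a = half[0], b = half[1];
    BigInteger c = a.multiply(b.shiftLeft(1).subtract(a)); // fib(2k)
    BigInteger d = a.multiply(a).add(b.multiply(b));       // fib(2k+1)
    if (n % 2 == 0) {
      return new BigInteger[] { c, d };
    }
    return new BigInteger[] { d, c.add(d) };
  }

  public static void main(String[] args) {
    System.out.println(fibPair(44)[0]);   // 701408733
    System.out.println(fibPair(1000)[0]); // exact, and still instantaneous
  }
}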


Of course the original post is still quite interesting: it shows how easy it can be to parallelize calculations in Haskell. But the example is silly, as another algorithm can lead to 10 million times the performance. Still, Haskell performs well compared to Java, whether the algorithm is bad or good.

Tuesday, November 20, 2007

Ubuntu 7.10 vs Fedora Core 8 - Gutsy vs Werewolf

I was pretty happy with Ubuntu 7.10, but when Fedora 8 came out I decided to give it a try. The last time I tried Fedora, it was Core 2 or something like that, and it was NOT good.

At first, Fedora 8 looks quite good and has a good Live CD install, reminiscent of Ubuntu. The positive side is that it is based on the latest kernel. It manages my Thinkpad T42 very well (suspend and hibernate work). But after a few days, one notices that Fedora is not as stable as Ubuntu, for example:
  • I have had weird behavior with windows not being updated properly.
  • I experienced big problems when playing with LVM.
  • It is also a general impression when interacting with the system.
One can wonder why Fedora 8 does not install OpenOffice by default.
Ubuntu is IMHO still the king of distros.

Friday, November 02, 2007

Apache DbUtils Completely Useless

I am disappointed by the Jakarta Commons DbUtils project. I give a link to it because it's a bad project (even if it is written in clean code). It is very simple, but it really does not do much for you.

I was looking for a very simple abstraction over JDBC. I thought bringing Spring into my project would be overkill. After trying DbUtils, I think again: it does not help. It does not handle frequent cases well, and it does not save many lines of code.

I am a bit angry about it, as I noticed that my test program, which took 2 s with straight JDBC, now takes 1 minute with DbUtils!

The reason behind this huge performance penalty is that there is no way to simply reuse a PreparedStatement with the existing classes. For each query with the same SQL, it creates a new PreparedStatement object, even if you reuse the connection. I am surprised, since reuse is probably why PreparedStatement exists in the first place. How can such a project be part of the Jakarta repository?
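
To make the point concrete, here is a plain-JDBC sketch of the reuse DbUtils does not let you express: prepare once, execute many times with different parameters (the table and column names are invented for the example).

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class ReuseExample {
  static void loadPrices(Connection con, int[] ids) throws SQLException {
    PreparedStatement ps =
        con.prepareStatement("select price from item where id = ?");
    try {
      for (int id : ids) {
        ps.setInt(1, id); // only the parameter changes between executions
        ResultSet rs = ps.executeQuery();
        try {
          while (rs.next()) {
            System.out.println(id + " -> " + rs.getDouble(1));
          }
        } finally {
          rs.close();
        }
      }
    } finally {
      ps.close(); // one statement for the whole loop
    }
  }
}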

Now I just wish Spring were more Guice-like; maybe I should write a Spring JDBC-like layer for Guice.

Friday, October 12, 2007

I fell in the trap of Boolean.getBoolean()

I was struggling to find a bug in a very simple application. It ended up being something as simple as using the damned Boolean.getBoolean("true") call instead of Boolean.valueOf("true").booleanValue().

The Boolean.getBoolean method is something you almost never need, as it checks whether a particular system property is set to true. There is a similar method, Integer.getInteger, and a quick Google search shows I am not the only one to think those methods should never have been part of the basic API of Boolean/Integer. They are too easy to confuse with parseBoolean/parseInt, especially as parseBoolean does not exist in JDKs prior to 1.5 (parseInt is older).
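
A small program makes the difference obvious (my own illustration):

public class BooleanTrap {
  public static void main(String[] args) {
    // Looks up a *system property* named "true", which is almost certainly unset.
    System.out.println(Boolean.getBoolean("true"));             // false

    // What was actually intended: parse the string "true".
    System.out.println(Boolean.valueOf("true").booleanValue()); // true
    System.out.println(Boolean.parseBoolean("true"));           // true (JDK 1.5+)

    // getBoolean only returns true when a system property with that name is "true".
    System.setProperty("my.flag", "true");
    System.out.println(Boolean.getBoolean("my.flag"));          // true
  }
}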

I cannot imagine the loss of productivity this method has caused, given that it is part of one of the most used classes in the world.

Tuesday, September 25, 2007

Fast Web Development With Scala

I am currently experimenting with Scala. It seems quite convenient for web applications. Using Tomcat, it is possible to have a very productive developer environment.
Here is a sample Embedded Tomcat you can start in a Scala project:
import java.io._;
import org.apache.catalina._;
import org.apache.catalina.startup._;

object TomcatScalaServer {

  val CATALINAHOME : File = new File("../newsbeef.com");
  val WEBAPPS : File = new File(CATALINAHOME, "webapps");
  val ROOT : File = new File(CATALINAHOME, "web");
  val HOSTNAME : String = "localhost";
  val PORT : Int = 8080;

  // Block forever so the embedded server keeps running.
  def await() {
    while ( true ) {
      try {
        System.out.println("sleeping 100s");
        Thread.sleep( 100000 );
      } catch {
        case ie : InterruptedException => // ignore and keep sleeping
      }
    }
  }

  def start() {
    // Embedded is the programmatic entry point to Tomcat.
    val server = new Embedded();
    server.setCatalinaHome(CATALINAHOME.getAbsolutePath());

    val engine = server.createEngine();
    engine.setDefaultHost(HOSTNAME);

    val host = server.createHost(HOSTNAME, WEBAPPS.getAbsolutePath());
    engine.addChild(host);

    // "" maps the context to the root of the host.
    val context = server.createContext("", ROOT.getAbsolutePath());
    context.setParentClassLoader(Thread.currentThread().getContextClassLoader());
    context.setReloadable(true);
    host.addChild(context);

    server.addEngine(engine);

    val http = server.createConnector(HOSTNAME, PORT, false);
    server.addConnector(http);

    server.start();
  }

  def main(args: Array[String]) {
    start();
    await();
  }

}

Here is a sample Scala servlet outputting HTML directly. This is a simple example, but it shows something important: with Scala, the view layer can just be regular Scala classes. There is no need for JSP or other templating languages, as Scala already embeds XML very nicely. By using the reloadable feature of Tomcat (there are also other, pure Scala ways) and Eclipse's automatic compilation, changes are instantaneously taken into account.
import javax.servlet.http._;

class ScalaServlet extends HttpServlet {

  override def init() {
  }

  override def doGet(request : HttpServletRequest, response : HttpServletResponse) {
    service(request, response)
  }

  override def service(req: HttpServletRequest, resp: HttpServletResponse) {
    val pw = resp.getWriter();
    // The whole page is a Scala XML literal: no templating language needed.
    val output = <html>
    <head><title>Scala Servlet Test</title></head>
    <body>
      <h1>Hello World!</h1>
    </body>
    </html>
    pw.println(output);
    pw.flush();
  }
}

Now I am eagerly waiting for improvements in the Eclipse Scala plugin (Organise imports, class navigation).

Monday, August 27, 2007

2 Months of Ubuntu on Mac Mini

I am finally happy with my OS. I previously had some complaints about Mac OS X and the Mac Mini. That is now over: with Ubuntu, I am very happy with my quiet system.

I use Quod Libet for audio; it has a similar interface to iTunes, with more features (the ability to play most audio formats). I chose Quod Libet instead of the standard Rhythmbox because of its practical MP3 tag handling. This also means that, unlike with iTunes, when I reimport my full library with another player, or on another computer, it is all organized the right way, because the right metadata is in the audio files and not in an XML file that sometimes gets corrupted.

I can use OpenOffice (not yet available in a non-alpha version for Mac OS X).

I can use Picasa or other more standard alternatives instead of iPhoto.

I can use free guitar tuners and plenty of other esoteric software.

Remote control, the fancy Bluetooth Apple keyboard, CD burning, the DVD player and the printer all work flawlessly. And it's all free software (except Picasa, which is only gratis).

I am happy with my Ubuntu system :).

Thursday, August 23, 2007

Spring Web Services, Finally!

Spring Web Services seems to be the technology I have been looking for recently. I am not a Spring bigot (it is too XML-oriented), but here the Spring folks have got something right.

I used to work with Web Services the simple way: create a Java class (or EJB), expose it as a Web Service through Axis or RAD, generating the WSDL in the process. A client is just the reverse: take the WSDL and use a tool (Axis or RAD) that creates the client Java classes from it automatically. Simple, easy.

But this process starts to fail if you have:
  1. several very similar WSDLs: you want reuse instead of copies.
  2. other means of communicating the XML represented by the XML schema embedded in the WSDL, for example via direct MQ use.
In those cases, the contract-first approach is particularly interesting. However, most tools, even when they allow a contract-first approach, don't give you enough access to the message itself, so you can do 1) but not 2). I always found it a bit silly that Axis or RAD had the logic to marshall/unmarshall Java objects, but did not give any explicit API access to it, or any way to replace it with a standard one (JAXB 2 for example).

I found 2 technologies that can help:
  • SDOs (Service Data Objects): from my short experience, I find them a bit too verbose and not yet fully mature, as you depend on libraries external to the SDO ones to make them work in the case of web services. They can work, and if you use IBM products, they could be a good way to write Web Services providers/clients.
  • Spring Web Services: I have not tried it yet, but it seems to solve exactly the kind of problems I described earlier. And you can plug in any marshalling/unmarshalling framework you want :).
There are so many libraries to do web services, and different approaches, that an initiative like Spring Web Services is more than welcome!

Thursday, August 02, 2007

Original Pattern: ServletRequest in ThreadLocal

After seeing that Scala had elements of Erlang through Actors, I decided to take a closer look at the language. There is an interesting new web framework in Scala, called Lift. One drawback of Lift is that it seems to be very cutting edge and not that easy to grasp. While reading its source code, I stumbled upon a strange pattern: storing the ServletRequest in a ThreadLocal.

I had not seen that before and was wondering why one would do such a thing; it seems unintuitive. I found my answer through... GWT widgets. On this page, the author explains the motivations behind doing such a thing:

While not 100% in tune with the MVC pattern, it is often convenient to access the servlet
container, the HTTP session or the current HTTP request from the business layer. The GWT-SL
provides several strategies to achieve this which pose a compromise in the amount of configuration
required to set up and the class dependencies introduced to the business code.

The easiest way to obtain the current HTTP request is by using the ServletUtils class
which provides convenience methods for accessing the HttpServletRequest and
HttpServletResponse instances. Please note that it makes use of thread local variables
and will obviously not return correct values if used in any other than the invoking thread.
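
In code, the pattern boils down to something like the following filter-based sketch (my own illustration, not the actual Lift or GWT-SL implementation):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public final class RequestHolder {
  private static final ThreadLocal<HttpServletRequest> CURRENT =
      new ThreadLocal<HttpServletRequest>();

  // Any code running on the request thread can fetch the current request here.
  public static HttpServletRequest get() { return CURRENT.get(); }

  // A filter binds the request before the chain runs and clears it afterwards,
  // so deeper layers can reach the request without it being passed around.
  public static class BindingFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
        throws IOException, ServletException {
      CURRENT.set((HttpServletRequest) req);
      try {
        chain.doFilter(req, resp);
      } finally {
        CURRENT.remove(); // avoid leaking the request to the next task on this pooled thread
      }
    }
  }
}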

Still, one can doubt whether this is good design. In my long experience of web apps in Java, I have never needed to do such a thing. Have you seen this pattern before?

Friday, July 27, 2007

Vim setup

Here is my Vim setup information, for reference.

in .vimrc or _vimrc, add at the beginning:
set langmenu=en_US.ISO_8859-1
set gfn=Bitstream_Vera_Sans_Mono:h9:cANSI
colorscheme oceandeep

The first line is to avoid menus in French.
The font (which you can find here) is simply the best programmer's font.
The oceandeep colorscheme can be found here.

Why Eclipse Is Better

Initially I adopted Eclipse instead of Emacs because it was more powerful for searching code, and it allowed refactoring. I regularly try other IDEs but always end up back in Eclipse, even though there have been fewer big improvements in Eclipse in the past years (but lots of small ones).

I just saw today that Eclipse allows programmatic refactoring. Now that's something quite amazing, and I don't think other IDEs do that yet. Someone even had fun writing an Eclipse extension in Scala to add a particular kind of refactoring to Eclipse.

Wednesday, July 11, 2007

Tapestry5 vs Wicket: 1 - 0

Getting started with Tapestry 5 is easier than with Wicket 1.3. Some readers will complain that this is again the view of someone who has no deep knowledge of either Tapestry or Wicket. But I think it is important for projects to be easily accessible to developers. Wicket seems to have more buzz around it these days, and has a detailed wiki with plenty of useful information in it. But that's the problem I see with Wicket: it is not simple to do simple things, which is why there is so much information on doing simple things in the Wicket wiki.

Granted, my test was based on a specific use case for component frameworks. I was not so much interested in statefulness; I wanted to display a bookmarkable "user page" with content coming from Hibernate. This kind of behaviour is quite common in web applications, especially in web 2.0.

It was relatively easy to get the page working with Wicket, although I was disappointed by its Hibernate integration. Hibernate integration in Wicket means either using the full Databinder project or creating your own solution. I chose the latter, based on source code from Databinder, but I actually rewrote everything in the end. I was disappointed that Databinder, a framework specifically oriented towards Hibernate, did not really handle Hibernate sessions in the simplest way possible. Tapestry5 got that right. To manage Hibernate sessions properly, I had to delve into the Wicket code, as no documentation offers insight into the inner workings of Wicket. The code was too complex for my taste. In my short experience, it seemed the developers are changing it for the better, removing some unnecessary abstractions.

In the end I got frustrated many times with Wicket, and did not manage to get a bookmarkable page the way I wanted. You can have a bookmarkable page, but after some action on the page it becomes unbookmarkable. Furthermore, the structure of the URL is not very flexible without completely rewriting the bookmarkable page feature of Wicket yourself.

With Tapestry5, I was at first worried about the small amount of documentation on the site and the use of Maven in the tutorial. I was wrong: the documentation proved to be exactly what I needed, and detailed enough. It is much easier to understand how Tapestry5 works after reading its docs than it is with Wicket. Concepts in Tapestry5 are simpler and more powerful. Maven use is in the end not that big a deal; I am still not as comfortable with it, but I am productive enough that it is not an issue, and much more productive than with Wicket. The standard tutorial setup is a very good one.

Doing a bookmarkable page was trivial; it was also easy to get the URL format I wanted, and it was kept in the location bar after an action. Hibernate integration was trivial, since Tapestry5 provides the tapestry-hibernate module, a few classes that manage the session and transactions for you. The only drawback is maybe yet another inversion of control system to learn. Tapestry5 IoC is very close to Guice in its philosophy; I wish Guice had been made the default IoC in Tapestry5.

To conclude, there is no doubt about it, Tapestry5 is the winner.

Saturday, June 30, 2007

NetBeans 6.0M10 out without announcement yet!

I just found it while browsing the NetBeans website; here is the link. NetBeans is starting to be much more interesting than it used to be before 5.5, even though the shortcuts are a pain, because they are so different from most other editors and not always defined for important tasks. I like the all-integrated feeling, without plugins and sluggishness, by default.

Tuesday, June 12, 2007

Use ORM For Better Performance

This is not something I would have thought a few years ago. It is something I learnt after working on many different projects, some using an ORM layer like Hibernate, Entity EJBs or JDO, some using a JDBC approach via Spring templates or custom frameworks. Many projects that use ORM have performance problems, which don't seem that common in projects using JDBC. But the database model of ORM projects is often much bigger than that of JDBC projects (which actually makes sense). If you only have a few queries to do, why bother with ORM? That would be complexity for nothing.

But for most enterprise projects, the database model is quite big, and the model itself can be complex (many relations between many tables). With this kind of model, ORM is more efficient. It is faster to develop with and creates fewer bugs due to misspelled strings or badly read types. It also performs better. Doing one giant query to retrieve everything in one step is not faster, especially if you don't always need all the information retrieved. In a complex model, many cases are specific, only useful 10% of the time. With a JDBC approach the temptation is high to do one giant query, because it is substantially longer (and more work) to do N queries. With ORM it is a bit the opposite: by default, N queries are easier to do. The problem is that N tends to be very high if one is not careful with the mapping, the classic N+1 problem. However, it is simpler to reduce the number of queries by joining tables than to split queries apart, so ORM performance optimization feels more natural.
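
As an illustration of what that optimization looks like in practice, here is a Hibernate sketch of the N+1 problem and its usual fetch-join fix; the Order entity and its lazy "lines" collection are invented for the example and assumed to be mapped elsewhere.

import java.util.List;
import org.hibernate.Session;

class OrderQueries {

  // N+1: one query for the orders, then one extra SELECT per order the first
  // time its lazy "lines" collection is touched.
  static List naive(Session session) {
    List orders = session.createQuery("from Order").list();
    for (Object o : orders) {
      ((Order) o).getLines().size(); // triggers a separate SELECT for each order
    }
    return orders;
  }

  // The usual fix: join the tables so everything comes back in one SELECT.
  static List fetched(Session session) {
    return session
        .createQuery("select distinct o from Order o left join fetch o.lines")
        .list();
  }
}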

Martin Fowler tends to be also pro ORM in its "Domain Logic and SQL" article. He also mentions something interesting about SQL query optimization:

It's also worth pointing out that this example is one that plays to a database's strengths. Many queries don't have the strong elements of selection and aggregation that this one does, and won't show such a performance change. In addition multi-user scenarios often cause surprising changes to the way queries behave, so real profiling has to be done under a realistic multi-user load. You may find that locking issues outweigh anything you can get by faster individual queries.


In the end it is up to us to make ORM or JDBC approach perform. JDBC provides much more direct access to database, and in benchmarks (always simple database models) or in theory it should be faster. But in the real world, I argue that ORM optimization is simpler and therefore, often ORM projects will perform better.

Wednesday, May 30, 2007

People Using Spring, EJBs Don't Know Basic JDBC

I recently found a bug in software we are developing. I traced it and found that the root cause was improper JDBC handling. The application is written using EJBs, Spring and plenty of other relatively complex technologies. I was surprised that developers who were able to use all those technologies had no understanding of basic JDBC.

They fetched all the data (including doubles and decimal numbers) from the database as Strings, using rs.getString()!

While this is possible most of the time, it is also undesirable most of the time (in the code they were actually converting the strings back to numbers, etc.). More importantly, it can lead to nasty bugs due to different Locales (the "." vs "," game, for example). And this is exactly what was happening in our application.
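
A minimal sketch of the difference (the column name is invented for the example):

import java.sql.ResultSet;
import java.sql.SQLException;

class PriceReader {
  // Fragile: getString() hands back a formatted string, and turning it back into
  // a number is exactly the Locale-sensitive step that goes wrong ("1,5" vs "1.5").
  static double fragile(ResultSet rs) throws SQLException {
    return Double.parseDouble(rs.getString("price"));
  }

  // Straightforward: let JDBC do the type conversion.
  static double better(ResultSet rs) throws SQLException {
    return rs.getDouble("price"); // or rs.getBigDecimal("price") for exact decimals
  }
}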

Wednesday, May 16, 2007

Wizards Bad For Productivity

IBM RAD comes with many wizards: to create EJBs, to create Web Services, to do Struts mappings... They are quite well done, making EJB < 3.0 usable and Web Services look simple.

But wizards suck at:
  • correcting typos
  • repetition

When you make a typo in a wizard, all the files generated or changed are wrong, and you don't necessarily know whether you can just do a search and replace. Plus, you don't necessarily know all the files that were affected by the typo.

In RAD, there is even a wizard to help you create a JSP. What does it do? Well, for example, it generates those 2-3 lines of tag library includes. The first time you write a JSP it might help you, but the second time it's much faster to copy/paste. Generally, when you have to do a wizard operation many times, it is much slower than doing it via copy/paste and a few modifications here and there, or than using a more configurable alternative like XDoclet.

The other complaint I have about wizards is that they hide too much of how things work. Because anybody can use them, they make you think that the operations they perform, and the technologies behind them, are simple.

More generally, it feels to me like wizards are only there because of some over-complicated design somewhere. EJB 3.0 is much nicer to work with than EJB 2; a wizard will help you much less with EJB 3.0, and yet EJB 3.0 is more powerful.

Sunday, April 29, 2007

Less Productive With Maven2.

My first trials of Maven were failures. As I am stubborn, I tried again on a new project, quite a simple one. It works, but it makes some easy things overkill. And the default way of using it makes a developer lose lots of time.

If I have a project with common classes, a standalone app, and a web app, then logically you make 3 projects, 2 of them depending on the common one. That's how the default Maven setup works, and that's what their documentation presents. Now, when using maven eclipse, this will create 3 Eclipse projects, none depending on each other. If you modify something in the common code, it won't be seen by any of the other code; you have to publish it with Maven first, which takes way too much time. Furthermore, I did not see any way to force the common project to rebuild automatically from one of the other projects. If you modify code in both the common and the web app project, you need to call Maven twice. I find all this very counterproductive, because you do those steps extremely often. Now, there are probably ways to do this with Maven2, but it is not the default behavior. I could add project dependencies in Eclipse manually and forget about Maven while working in Eclipse, but then the maven eclipse plugin is really useless. And you'll face the same issues when you want to use the Maven Tomcat deploy.

Even more worrying: after moving back to Ant, I saw a strange bug with Spring context loading disappear. Maven hides so much that it is no longer obvious how your app is deployed.

Developers lose power with Maven. It's a pain to do something a bit differently than the default Maven way. With Ant, people gain power. I see the two as the distinction between a framework approach (Maven) and a library/API approach (Ant). By default, Maven tries to do a lot, while Ant tries to do nothing. It's very easy to build exactly what you need with Ant, while it is of course difficult with Maven.

Some parts of Spring have a similar disadvantage to Maven. If you do everything in XML with the maximum of Spring magic, you'll spend hours trying to figure out how to do things and why it does not seem to work the way you think it should. If you use Spring as an API, like the wonderful Spring JDBC, development will be fast (faster than with straight JDBC, for example), and your program flow is easy to follow.
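
For the record, "using Spring as an API" means code along these lines with Spring JDBC (a minimal sketch; the query and mapping are invented for the example):

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

// Spring JDBC used as a plain library: no XML magic, the program flow stays visible.
class UserDao {
  private final JdbcTemplate jdbc;

  UserDao(DataSource dataSource) {
    this.jdbc = new JdbcTemplate(dataSource);
  }

  List findNames(int minAge) {
    return jdbc.query(
        "select name from users where age >= ?",
        new Object[] { Integer.valueOf(minAge) },
        new RowMapper() {
          public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            return rs.getString("name");
          }
        });
  }
}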

Friday, April 27, 2007

How to Build Good Software? Good network connection

Not having a good internet connection can be a problem when downloading new libraries or reading and searching for documentation on development subjects. But not having a good internal network connection is a productivity killer. It means sometimes not being able to access the integration, preprod or even production environments, or an ssh session not responding in the middle of an action. As software makes increasing use of the network, it means not being able to test or use all kinds of software correctly.

Wednesday, April 25, 2007

How to Build Good Software? Private office, again

Apparently it's more a habit of French companies to have big open spaces with no separation at all between people. There is nothing more annoying than having people on a conference call in front of you while you are trying to work on something completely different. French people forgot the cubicle part of the American open space idea. So sometimes the room is just a big mess, with everybody able to disturb you at any time. Even if I get no private office, please at least give me a cubicle.

Wednesday, April 18, 2007

How to Build Good Software? Welcome newcomers

Some companies do it naturally, some really don't. In small companies it is almost natural: people will make a newcomer productive very quickly. In a big company it's not the same game.

Some important points are:

  • Computer ready the first day, well sized (right RAM, right power: developers are not MS Office users), right OS. I have experienced not getting the right amount of RAM, not the right version of the OS, not the right user rights to install and use software critical to my work, and all of this was known to the team. I also hate it when companies give developers/architects the cheapest computer available. On top of that, badly configured computers often take a month to be ready in big companies. It just does not make sense.
  • Network access. I have seen people coming for a short contract and not having a network account or an email account for a week.
  • Give documents to read, and show the applications the person will work with. Involve the newcomer in new decisions on his project.
Some big companies get it right. I remember my internship at IBM Germany, where on the first day I had a box waiting for me with a computer inside, which I had to unpack and install (with OS/2) for my own use. I think this is the best way for developers/tech people. And then they recommended excellent reading on the subject I would be working on. It's not that difficult.

Friday, April 13, 2007

Project Estimations And Fibonacci Sequence.

I was recently in a meeting where use case complexity was estimated using numbers from the Fibonacci sequence. I was surprised by the choice of the Fibonacci sequence. Why not any sequence? Why this particular one? I googled and found the culprit, Mr Mike Cohn, in his book Agile Estimating and Planning. It's actually not a bad sequence to choose, since the gaps in the scale grow steadily, so by picking numbers from this sequence you can describe an estimate quite accurately. If you have defined complexities of 1, 2, 3, 16 and 17 corresponding to 5 different use cases, then obviously 16 and 17 denote roughly the same complexity, and it would be surprising if you could really distinguish the two. You need a scale whose steps keep growing. But a power-of-2 scale might not be precise enough (the steps grow too fast).
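
A quick look at the step sizes of the two scales (my own illustration):

public class EstimationScales {
  public static void main(String[] args) {
    int a = 1, b = 2, pow = 1;
    for (int i = 0; i < 10; i++) {
      // Fibonacci grows by roughly 1.6x per step, powers of 2 by 2x.
      System.out.println("fibonacci: " + a + "   power of 2: " + pow);
      int next = a + b;
      a = b;
      b = next;
      pow *= 2;
    }
  }
}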

Still, I think the main reason he chose the Fibonacci sequence is the Da Vinci Code, which was popular in 2004 when he wrote his book. And this particular series seduces people easily, be it because of the Da Vinci Code or because a mathematical tool gives the impression our estimates are better, even if there is no real mathematical reason to use it.

How to Build Good Software? Use bug management software, really.

This will seem obvious; unfortunately, when people are involved, nothing is that obvious. Setting up bug/feature management software does not mean people will use it. You have to force people to go through the bug tracker every time they want something fixed. If you don't, some people will keep sending incomplete mails or, worse, call you to get something fixed, and it will be forgotten within a week. It is also very useful to avoid receiving the same request ten times from the same person.

Thursday, April 12, 2007

How to Build Good Software? Have a good build process

Important points are: a standard code hierarchy, automatic download of dependencies, a distribution command with versioning support and source control interaction, and a simple command to build each part of the project. In Java the best candidates are a sophisticated Ant build or Maven 2. Maven 2 is quite good since it forces you to do some of those steps, even though I think Ant can't really be avoided for many of the more specific tasks.

Once, I took the wrong code with me to a customer site, because the code delivered to the customer was not systematically checked into a source repository before each delivery. Furthermore, the code I had could not be built because of missing jars, and because project isolation was poor, source directories shared with other projects had to sit in very specific directories at a particular level. It took me 7 days to understand their very awkward build process.

A good build process depends on good CVS (or other source control system) management, which itself depends on a good code split. The CVS head should always compile. If it does not, people will start working around it. Unfortunately, working around it means using it as rarely as possible, either by not updating your code frequently, or by working in a branch that cannot be merged cleanly because head is in a bad state. I have seen this on projects; the result is integration headaches and extremely poor overall quality.

A good check for a build process is to see how long it takes, and how many steps are needed, to build the latest deliverable on a new machine, and how difficult it is to put that build into production.
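
To make that check concrete, on a healthy project it should boil down to a couple of commands. The sketch below is made up (hypothetical repository URL, and it assumes a Maven 2 project hosted in Subversion), but it is roughly the bar to aim for:

svn checkout http://svn.example.com/myproject/trunk myproject   # hypothetical URL: get the latest sources
cd myproject
mvn package   # dependencies are downloaded automatically, one command produces the deliverable

If getting to a working deliverable takes many more steps than that, or a day of chasing missing jars, the build process needs work.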


Wednesday, April 11, 2007

How to Build Good Software? Private Office

In an open space, people keep coming over to discuss various issues with various people, issues that have nothing to do with your work. You end up either distracted or annoyed by the increased noise level.

Apparently at Microsoft they have a private office for each programmer. It might be extreme, though paradoxically not in the XP (extreme programming) sense, but it is much better for productivity than an open space. In XP it's almost the opposite, with 2 developers often working together.

Tuesday, April 10, 2007

How to Build Good Software? Lay Off Quickly.

If you have to let someone go, do it quickly. I don't understand companies that want to keep someone as long as legally possible when that person wants to leave. First, the employee won't be as motivated; more importantly, you will keep training that person on your company's software and ways of working, effort that would be much better spent on someone who will stay in the company. The time before a departure should be used exclusively for knowledge transfer.

Friday, April 06, 2007

How to Build Good Software? Talk to people, especially the ones you don't know well.

Someone modified a simple launch script on an integration machine. This pissed off the author of the script. Why?

Simply because the guy who modified the script had never worked with its author before. If the author had just been notified of the modification, verbally or by mail, he would have been happy. It would also have increased the quality of the change: the modification might have other impacts, which the author is best placed to evaluate quickly.

Thursday, April 05, 2007

Find Grep And Vi Keys Small Memo

Now and then I forget this command to grep over a specific list of files:

find . -name "*.xml" | xargs grep "iwantthis"

And I also tend to forget the vi keys. Small extract:

h - move left one character
j - move down one line
k - move up one line
l - move right one character
$ - go to the end of the current line
0 - go to the beginning of the current line
G - go to the last line in the file
15G - go to line 15
control-F - forward one page
control-B - backwards one page
n (N) - next (previous) in search mode (/ or ? forward or backward)
:s/OLD/NEW/g - replace every occurrence on the current line
:%s/OLD/NEW/g - replace every occurrence in the file (or use 1,$ instead of %)

x - delete one character

Update: I have to add the standard replace-in-multiple-files sed command. Here is an example of how to move your Eclipse workspace to another directory:
  • find . -name "*.xml" | xargs sed -i "s,c:[/\\]java[/\\]eclipse,d:/eclipse302,gi"
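
One caveat on the find | xargs combination above: it breaks on file names that contain spaces. When that can happen, the null-separated variant is safer:

find . -name "*.xml" -print0 | xargs -0 grep "iwantthis"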