My colleagues Markus Hjort and Marko Sibakov and I are holding a full-day training session on applying the JDave Behavior-Driven Development (BDD) framework to developing Apache Wicket applications. It will take place at the ApacheCon EU 2009 conference in Amsterdam on Tuesday 24 March, from 10:00 to 17:30 (with a lunch break in between).

The intended audience is Java developers who are already familiar with Wicket but want to get to know jdave-wicket and jdave-wicket-webdriver, and/or learn how to write better automated developer tests (or rather, specs) for Wicket.

We have posted some online material about the training, and there is a page on the session at the ApacheCon site as well.

Register soon — there’s only room for the first 30 🙂

If you have any questions or comments, please leave a comment on the blog or email us directly!


We recently wrote a piece of software using Java 1.4, which prevented us from using some nice developer testing tools that require at least Java 5 (also known as 1.5). Namely, these were JDave, which would have underlined that we really wanted to do behavior-driven development (BDD), and the WicketBenchTestCase of Wicket Bench, which allows easy testing of individual Apache Wicket components.

When discussing this, our coworker Ville remarked that it often makes sense to put the tests of a project in a separate module (a submodule in a Maven 2 multi-module project, and a separate project in Eclipse or another module in IntelliJ IDEA). This way it would be trivial to use a different JDK for test and production code. Nevertheless, we decided to keep it simple by doing everything in a single Java 1.4 module; the software was small, and we felt that the marginal hassle of going from n to n + 1 modules is highest when n == 1.

We wanted to do system testing on a production-resembling platform as early in the project as possible, but due to various circumstances out of our control, we only got there rather late in the project. And when we finally had deployed the software on the real application server, we were baffled by unexpected errors in some LDAP searches done by our application.

After a quick screening of the differences between our development environments and the system test environment, we tried installing the (proprietary) production JDK on our development machines as well. To our great satisfaction, some of the various developer tests (unit + integration tests covering about 90 % of all program code lines, ahem! :)) started failing with exactly the same errors as in the test environment. So this was a case of the JRE abstraction leaking.

Now that the problem could easily be reproduced on any development machine, fixing it was rather straightforward. We even changed our Continuum server to use the production JVM + JRE, and verified that after the fixes, the software was still working as expected on the Java implementation originally used in development.

And then a while later it occurred to me what a good idea, albeit for the wrong reasons, it had been to write our developer tests on the same Java version as the target environment. Otherwise, reproducing and fixing our bug might have been a lot more difficult.

This is a step-by-step guide for converting an old Enterprise JavaBeans (EJB) application (with EJB versions 1 and 2 in mind) to use the Spring Framework and Hibernate.

The aim of this article is to show, in a practical and detailed way, how this can be done. The question of why it is sometimes a good idea has been widely discussed; I nevertheless address it briefly in an earlier post.

Please bear in mind that you always need to know what you are doing, and to adapt this procedure to your environment. Also, this kind of work should only be done when it clearly provides value. My experience has been that as with any refactorings or other functionality-preserving changes, this kind of improvement is best done in conjunction with changing or adding functionality, when the code needs to be touched upon anyway.

For the sake of the example, I created a little EJB 2 application and deployed it on JBoss. You can check the application out from Subversion or browse it there. For building and running the application, there are instructions in README.txt.

The main application PersonFinderApp is a command line program for searching persons in the database:

    public static void main(String... args) {
        if (args.length < 1) {
            throw new IllegalArgumentException(
                    "Please supply a search term as a command-line parameter");
        }

        String searchTerm = args[0];
        // 'instance' (the PersonManager service) and 'out' are defined elsewhere in the class
        List<Person> results = instance.findPersons(new SearchParameters(searchTerm));
        for (Person result : results) {
            try {
                out.println(MessageFormat.format("{0}, {1} {2} ({3})", result,
                        result.getFirstName(), result.getLastName(), result.getUserName()));
            } catch (RemoteException e) {
                throw new RuntimeException(e);
            }
        }
    }

This is done as follows:

  • PersonFinderApp gets a stateless session EJB PersonManager using container-managed transactions (CMT)
  • PersonManager invokes the finder method findBySearchTerm of PersonEntity EJB to get the results
  • PersonFinderApp outputs the results

PersonEntity is a container-managed (CMP) entity EJB. Its home interface provides methods for creating a new entity, finding an entity by its primary key, and finding an entity by a search string with EJB QL. The finder EJB QL is specified in ejb-jar.xml that contains all EJB configuration. The EJB container of JBoss handles the persistence of PersonEntity objects to a Hsqldb database embedded in JBoss.

So this simplistic example provides the two common cases to convert to non-EJB solutions: a stateless session bean that takes care of transaction management and service functionality, and an entity bean that handles database persistence of domain objects.

Prerequisites

Before doing anything, ensure that you have good automatic, in-container regression tests in place verifying that the EJB application works as expected. I have written these as JDave specifications. For example PersonFinderAppSpec.InContainer tests the main application which uses the deployed EJBs on the container:

public PersonFinderApp create() throws RemoteException {
    PersonDao dao = new ServiceLocatorImpl().getPersonDao();
    person = dao.create("appPerson-" + System.currentTimeMillis(), "Bill", "Apperson");
    return new PersonFinderApp();
}

public void destroy() throws RemoveException, RemoteException {
    person.remove();
}

public void findsPersonsFromDatabase() {
    List results = context.findPersons(new SearchParameters("Bill"));
    specify(results, contains(person));
}

There are also more fine-grained specifications.

Changing the stateless session bean to a Spring bean

I’ll start with the easier case of converting PersonManager to a normal Spring bean. First I add a dependency on Spring 2.0.6 to the project pom.xml files, and add to the shiny new applicationContext.xml a Spring transaction manager bean that the new Spring bean will use. For this, I add these specification methods to ServiceLocatorImplSpec:

public void getsSpringApplicationContext() {
    specify(context.getApplicationContext(), does.not().equal(null));
}

public void getsSpringTransactionManagerFromApplicationContext() {
    ApplicationContext applicationContext = context.getApplicationContext();
    specify(applicationContext.getBean("txManager"), does.not().equal(null));
}

For the sake of completeness, I first make it use the JTA transaction manager of JBoss. In this little application, as in many real-world cases, transactions are not so complex that they would have both Spring beans and session EJBs operating within the same transaction. But if that happens, it’s good to know that Spring supports using JTA from the application server for its own declarative transaction management. In our example application, which uses a remote client, there is a glitch: on the client side you can only use UserTransaction, because the actual JTA transaction manager is only available within the server. For server-side JTA transactions you would put

       <property name="transactionManagerName" value="java:/TransactionManager"/>

for the Spring txManager bean (JtaTransactionManager). The user transaction which we will use is configured like this:

  <bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager"
dependency-check="none">
    <property name="autodetectTransactionManager" value="false"/>
    <property name="userTransactionName" value="UserTransaction"/>

I also make a new ApplicationContextHolder class which acts as the glue-code singleton for our legacy app. It takes care of creating the Spring application context from the XML configuration:

public class ApplicationContextHolder {
    private static ApplicationContext context;

    public static ApplicationContext getContext() {
        if (context == null) {
            initContext();
        }
        return context;
    }

    private static void initContext() {
        context = new ClassPathXmlApplicationContext("classpath:applicationContext.xml");
    }
}

ServiceLocatorImpl.getApplicationContext() just delegates to this class. The implementation is not thread safe, so it assumes that Spring context usage does not start on multiple threads at the same time.
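
If that assumption ever becomes a problem, one simple alternative would be eager initialisation, which is thread safe without any synchronisation (a sketch, not what we did):

public class ApplicationContextHolder {
    // created on class loading, which the JVM guarantees to happen safely exactly once
    private static final ApplicationContext context =
            new ClassPathXmlApplicationContext("classpath:applicationContext.xml");

    public static ApplicationContext getContext() {
        return context;
    }
}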

To make things easier, PersonManager uses business interfaces to abstract the EJB access: PersonFinder and PersonDao. This is good practice and, if not already present, can often be refactored into existing EJB applications. The only problem with old remote EJBs is that the method signatures get littered with throws RemoteException.
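
For illustration, the finder side of such a business interface might look roughly like this (findPersons is the method used by PersonFinderApp above; the throws clause is the litter in question):

public interface PersonFinder {
    // throws RemoteException only because the implementation is (still) a remote EJB
    List findPersons(SearchParameters parameters) throws RemoteException;
}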

And now to changing the actual bean: I first create another specification method in PersonManagerSpec:

public void isFoundInSpringApplicationContext() {
    ApplicationContext applicationContext = new ServiceLocatorImpl().getApplicationContext();
    specify(applicationContext.getBean(PersonManager.ID, PersonManager.class), does.not().equal(null));
}

and it fails as expected:

org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'personManager' is defined

Now I remove EJBObject from the extends list of PersonManager, and make PersonManagerBean implement PersonManager directly instead of SessionBean and the business interface PersonFinder (PersonManager already extends PersonFinder). I also remove throws CreateException from the create method signature and handle the exception inside the method, because it now clashes with the signature of PersonDao.create(); in fact, all EJB-specific throws clauses can now be removed.
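
In code, the change amounts to roughly this (a sketch with the members elided):

// before: tied to the EJB API
public interface PersonManager extends PersonFinder, EJBObject { /* ... */ }
public class PersonManagerBean implements SessionBean { /* ... */ }

// after: a plain interface, implemented directly by the (ex-)bean class
public interface PersonManager extends PersonFinder { /* ... */ }
public class PersonManagerBean implements PersonManager { /* ... */ }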

PersonManagerBean still needs the @Transactional annotation to get declarative transaction management from Spring, and the annotation in turn requires

  <tx:annotation-driven transaction-manager="txManager"/>

in applicationContext.xml.
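
On the Java side, this is nothing more than the annotation on the now-plain bean class (a sketch):

@Transactional
public class PersonManagerBean implements PersonManager {
    // business methods unchanged; Spring wraps them in transactions
}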

Now it’s time to add the actual Spring bean configuration for PersonManager:

  <bean id="personManager" class="org.laughingpanda.ejb_migration_example.ejb.PersonManagerBean"/>

IDEA now complains that I should disable dependency checking or set the property sessionContext. So I delete the redundant setter method, and all the remaining EJB API methods as well, from PersonManagerBean.

And now the test passes in the IDE! However, I still need to change ServiceLocatorImpl to use the new Spring bean, and the stateless session bean needs to be removed from ejb-jar.xml, both to get the EJB build to pass and to remove the previously deployed EJB.

So I delete the whole bean definition from ejb-jar.xml and change the JNDI lookup

private PersonManager getPersonManager() {
    Object objref = lookup("PersonManagerEJB");
    PersonManagerHome home = (PersonManagerHome) PortableRemoteObject.narrow(objref, PersonManagerHome.class);
    try {
        return home.create();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

to Spring application context lookup

private PersonManager getPersonManager() {
    return (PersonManager) getApplicationContext().getBean(PersonManager.ID);
}
and rebuild and deploy, and now everything works!

I now remove the unnecessary throws RemoteException clauses from the business interfaces, and their handling from the PersonFinderApp.findPersons method, and after testing the application from the IDE, with Maven, and from the command line, our work on PersonManager is done for now.

The procedure might seem involved, but it is in fact almost mechanical. Because it must be done manually, it’s important to have automatic regression tests for the actual functionality, as well as to check the relevant functionality manually. For example, classloader issues may arise because code might get moved from the EJB classloader to another one lower in the classloader hierarchy.

To recap, to change a session bean to a Spring-managed Java bean

  1. set up Spring and add the necessary service lookups or configurations there (transaction manager, datasource…)
  2. make Spring context available to your service locator via a singleton class holding the application context
  3. ensure that you have a regression test for the functionality that uses the relevant EJB, and run it to verify that it passes
  4. create a new, failing integration test for getting the (soon-ex-enterprise) bean from Spring context
  5. remove EJB-specific things from the local or remote EJB interface
  6. remove EJB-specific things from the bean class, and make it directly implement its interface
  7. add the bean, which is now a plain old Java object (POJO), to Spring
  8. add Spring transaction management to the bean
  9. change the service locator to look up the Spring bean instead of the EJB
  10. remove the old session bean from ejb-jar.xml
  11. do a clean build + deploy, run the regression test and correct errors if any
  12. verify that the real application still works as expected
  13. check that everything looks clean and remove any extra warts left behind, such as the old EJB home interface

When removing more EJBs after the first one, you can skip the first two steps, as the Spring plumbing is already in place. But when the beans start to get dependencies on each other, you should let Spring inject them into one another instead of having them look up other Spring beans via the service locator. An example of this appears below.

The current state of the software is in the 0.2 tag in Subversion.

PersonManager is now a Spring-managed POJO, even though it incidentally still uses the PersonEntity EJB. So that’s what we’ll turn to next.

Changing the entity bean to a POJO + Hibernate DAO

I start by adding the Hibernate dependency to the pom.xml files and updating the IDE configuration files accordingly. Then I make a failing integration spec for the soon-to-be Spring bean PersonDao:

@RunWith(JDaveRunner.class)
public class PersonDaoSpec extends Specification {
    public class InSpring {
        public PersonDao create() {
            return null;
        }

        public void isFoundInApplicationContext() {
            ApplicationContext applicationContext = ApplicationContextHolder.getContext();
            PersonDao fromSpring = (PersonDao)
                    applicationContext.getBean(PersonDao.ID, PersonDao.class);
            specify(fromSpring, does.not().equal(null));
        }
    }
}

I then start coding a Hibernate implementation of PersonDao, namely PersonHibernateDao. To give some substance to the method public PersonEntity create(String userName, String firstName, String lastName), I make a simple Java bean out of PersonEntityBean, making it implement PersonEntity directly. If you are using data transfer objects (DTOs), as is common in EJB 1 and EJB 2 applications, they can be used as a basis for the domain object implementations.

I also remove extends EJBObject from PersonEntity. While implementing the create method, I notice that all methods in Person throw RemoteException, which can now be removed.

For now, I end up with a Hibernate DAO class like this:

    public PersonEntity create(String userName, String firstName, String lastName) {
        PersonEntity person = new PersonEntityBean();
        person.setUserName(userName);
        person.setFirstName(firstName);
        person.setLastName(lastName);

        Session session = sessionFactory.getCurrentSession();
        Long id = (Long) session.save(person);
        return (PersonEntity) session.get(PersonEntityBean.class, id);
    }

I realise that I have to change PersonEntitySpec to use the DAO, and add the remove method to PersonDao.

Then I add the new DAO and Hibernate configuration to the Spring configuration file:

  <jee:jndi-lookup id="dataSource" jndi-name="DefaultDS"/>
  <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"
        dependency-check="none">
    <property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration"/>
    <property name="annotatedClasses">
      <list>
        <value>org.laughingpanda.ejb_migration_example.PersonBean</value>
      </list>
    <property name="schemaUpdate" value="true"/>
    <property name="hibernateProperties">
        <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
        <prop key="hibernate.show_sql">false</prop>
        <prop key="hibernate.cglib.use_reflection_optimizer">false</prop>
        <prop key="hibernate.query.substitutions">true=1, false=0</prop>
        <prop key="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</prop>
    </property>

  <bean id="personDao" class="org.laughingpanda.ejb_migration_example.PersonHibernateDao"
          autowire="constructor"/>

and now the bean is found in Spring context.

Now, because the EJB classes are no longer EJB classes on the client, PersonEntitySpec starts failing. I need to change PersonManagerBean to use the new DAO instead of PersonEntityHome. So I do that, add the rest of the PersonEntityHome methods to PersonDao, and change ServiceLocatorImpl.getPersonDao() to return the personDao Spring bean.

When running the specs again, Hibernate tells us that there is no mapping for PersonEntityBean:

org.hibernate.MappingException: Unknown entity: org.laughingpanda.ejb_migration_example.ejb.PersonEntityBean

So I tag the relevant portions of PersonEntityBean with JPA annotations, and creating new rows in the database now works.
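
The annotated class looks roughly like this (a sketch; the Long id field is inferred from the session.save call above, and the getters, setters and object methods are elided):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class PersonEntityBean implements PersonEntity {
    @Id
    @GeneratedValue
    private Long id;

    private String userName;
    private String firstName;
    private String lastName;

    // getters, setters, equals, hashCode and toString omitted
}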

The finder method still needs to be implemented with Hibernate, for which I use a simple Hibernate Query Language (HQL) query:

    public Collection findBySearchTerm(String searchTerm) throws FinderException, RemoteException {
        return getSession().createQuery("from PersonEntityBean person " +
                "where userName like ? or firstName like ? or lastName like ?")
                .setString(0, searchTerm).setString(1, searchTerm).setString(2, searchTerm)
                .list();
    }

and also make simple equals, hashCode and toString implementations. And now all tests pass again in the IDE!

Deploying the application still fails, because the entity bean needs to be removed from ejb-jar.xml. An empty enterprise-beans element would not be valid, so I end up deleting the whole file. This in turn makes the EJB build fail, so I remove the whole person-ejb module from the pom.xml files. Now the tests pass both from the IDE and from Maven, and the project can be built and deployed. And it still works from the command line too.

Now it’s time to clean up. First of all, there are a lot of unused throws clauses that I remove, and PersonEntityHome can be thrown away as well. While we’re at it, I also remove PersonManagerHome, which I forgot when converting PersonManagerBean. The PersonEntity interface is now redundant:

public interface PersonEntity extends Person {
}

so I inline that as well. I don’t know if the Person interface makes much sense either, but I leave it for now. I do rename PersonEntityBean to PersonBean, though, and remove ejbCreate and the other EJB methods from it. It can now be used in PersonFinderAppSpec instead of the PersonStub inner class used until now.

PersonManagerBean still looks up personDao from the ServiceLocator, so I replace that with autowiring by name, now that personDao is a Spring-managed bean as well. This causes PersonManagerSpec to fail, so I make it get the bean from the application context instead of constructing it by itself. But then again, PersonManagerBean doesn’t really do anything interesting other than delegate to personDao, except for this method:

public List findPersons(SearchParameters searchParameters) {
    try {
        String termWithWildcard = searchParameters.getSearchTerm() + "%";
        Collection rawResults = personDao.findBySearchTerm(termWithWildcard);
        List results = new ArrayList(rawResults.size());
        for (Object o : rawResults) {
            results.add((Person) o);
        }
        return results;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

so I just move the method to PersonHibernateDao, make PersonDao extend PersonFinder, and remove the whole PersonManager. This entails moving the relevant PersonManagerSpec contents to PersonDaoSpec as well.

This can be done, because

  • Unlike the container-managed entity bean PersonEntity EJB, the new PersonHibernateDao is a POJO to which I can add arbitrary methods taking arbitrary kinds of parameters: in this case, a finder method taking in a custom Java class, SearchParameters.
  • With the declarative transaction management of Spring, we can make the DAO itself transactional and customise its transaction attributes as needed (on Java 5, simply by adding the @Transactional annotation to it). Indeed my co-worker Joni has shown me that in many cases, you can just make the DAOs transactional, and use them directly from the client code. Separate services are only really needed when you need more complex transactions spanning across different transactional services, and even then it is often viable to use ad hoc transactions, for example with the handy TransactionTemplate from Spring.
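
As an example of that last point, an ad hoc transaction with Spring’s TransactionTemplate looks roughly like this (a sketch; the txManager and personDao arguments are assumed to be provided by the surrounding code):

public void createPersonInTransaction(PlatformTransactionManager txManager, final PersonDao personDao) {
    TransactionTemplate txTemplate = new TransactionTemplate(txManager);
    txTemplate.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // everything in here runs in a single transaction;
            // an exception thrown here rolls it back
            personDao.create("jdoe", "John", "Doe");
        }
    });
}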

After some cleanup, the whole PersonHibernateDao is quite lean.

So, to change an entity EJB to a Hibernate-persisted POJO:

  1. set up Hibernate
  2. ensure that you have a regression test for the functionality that uses the relevant EJB, and run it to verify that it passes
  3. create a new, failing integration test for getting the soon-to-be Hibernate DAO from Spring context
  4. remove EJB-specific things from the local or remote EJB interface
  5. create a POJO implementation of your domain objects, using the local or remote EJB interface or the DTO class as the base, and make sure to have good equals and hashCode implementations for it
  6. create the new Hibernate DAO, moving the relevant Home interface methods to it
  7. add the DAO to Spring
  8. change the service locator to look up this new Spring bean instead of the EJB
  9. verify that tests using the DAO now fail because the domain object class is not mapped yet
  10. add Hibernate mapping for the domain object class, either by JPA annotations or manual .hbm.xml mapping files
  11. run the tests using the DAO and correct errors if any
  12. remove the entity EJB from the ejb-jar.xml
  13. do a clean build + deploy, run the regression test and correct errors if any
  14. verify that the real application still works as expected

Now that we don’t have any EJBs left, I also remove the whole person-ejb module. The current state of the project is in the 0.3 tag in Subversion.

Removing the rest of the container dependencies for easier development

The application still uses DataSource and transaction manager from the application server. It doesn’t use the EJB container any more, so it could already be run on an express version of a commercial application server, but requires the application server to be running to even run the integration tests.

So I put the Apache Commons DBCP connection pool to use, and replace the JNDI lookup of the JBoss DataSource with the commons-dbcp BasicDataSource. I also change the Spring transaction manager from the JTA transaction manager to HibernateTransactionManager:

  <bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"/>
  <tx:annotation-driven transaction-manager="txManager"/>

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="jdbc:hsqldb:hsql://localhost:1701"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
    <property name="defaultAutoCommit" value="false"/>

In ServiceLocatorImpl, there was still a JNDI lookup for the DataSource, so I replace it with a lookup from the Spring ApplicationContext. I run the tests from the IDE, rebuild the application and run the tests with Maven, run the application, and everything still works.

But the real acid test of running the tests with JBoss shut down still fails, because Hsqldb is being run in server mode within JBoss. Fortunately separating settings between test and production mode is easy enough in Spring:

  <bean id="dataSourceProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="datasource.properties" />

  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="${db.url}"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
    <property name="defaultAutoCommit" value="false"/>

I put the server-mode configuration in the production version of datasource.properties, and the in-memory version in the test source path, and now the tests run in the IDE! In Maven 2.0.7 the test classpath override doesn’t work yet, so for now Maven still requires the separate database instance. Anyway, all code can now easily be developed just within the IDE.

I also remove all JBoss and EJB references from the project now that they are not needed any more, and have a general look at what the application is like at the moment. At least PersonBean doesn’t need to be in the ejb subpackage any more, and the whole package can be removed as it is now empty. Then I tag version 0.4, where I stop.

All questions, corrections and comments are most welcome. If you apply these instructions, be sure to have good regression tests, to move in little steps, and to know what you’re doing on each step. Happy EJB-hunting!

This article is dedicated to my workmate Markus Hjort.

I am about to publish a detailed hands-on guide for migrating a legacy Enterprise JavaBeans (EJB) version 1 or 2 application to a Spring Framework + Hibernate stack. While the guide concentrates on providing a concrete example of such a migration, I will here briefly discuss why and when such a migration is justified.

For a more authoritative source, I recommend Rod Johnson’s excellent books such as J2EE Design and Development and J2EE Development without EJB. The Spring Framework grew out of Rod’s work outlined in J2EE Design and Development.

Now (in 2007) everybody agrees that EJB specification versions 1 (from 1998) and 2 (2001) have inherent problems. It is delightful that EJB 3 (2006) has addressed many shortcomings of the earlier versions, often in ways suggested by leading open source frameworks, though the open source solutions outside the standard keep evolving more rapidly, and perhaps a bit more pragmatically, than the standard.

A brief genealogy of the earlier EJB versions can be found in a good post by Floyd Marinescu. The article underlines perhaps the biggest problem of the legacy EJB versions: the EJB 1 specification aims to make every EJB component reusable, even remotely over a network connection. While this provides little value in most enterprise cases, it adds complexity and API clutter. To achieve this questionable goal, EJB components have their lifecycle completely managed by the EJB container, which makes their out-of-container use nearly impossible.

I remember how using EJBs in place of direct RMI seemed to make perfect sense when I was reading Enterprise JavaBeans (third edition) by Richard Monson-Haefel. I had just been working on an RMI project for some months and, like Monson-Haefel, thought I could see how EJBs might have made our project easier. Monson-Haefel’s definition of EJB was:

Enterprise JavaBeans is a standard server-side component model for component transaction monitors. (p. 5)

But then I became slightly confused when seeing that in practice, session EJBs were often used in systems that either were or should have been collocated. And entity EJBs seemed stranger still: the object-relational mapping of container-managed persistence (CMP) was very cumbersome, and bean-managed persistence did not seem to offer any added value over the traditional way of persisting POJOs with JDBC.

The old EJB approach also gave false promises of being able to perfectly automate and abstract some fundamental programming issues, much in line with the RAD tools promises of making programming trivial:

EJB is designed to make it easy for developers to create applications, freeing them from low-level system details of managing transactions, threads, load balancing, and so on. Application developers can concentrate on business logic and leave the details of managing the data processing to the framework.

“A beginner’s guide to Enterprise JavaBeans”, By Mark Johnson, JavaWorld.com, 10/01/98

More problems arise from the classloader scheme of EJB. While good in some cases, in the typical collocated, single-application case the separation of EJB classes and their transitive dependencies into another classloader, invisible to e.g. the web application classloader, just makes development-time deployments slower than they would be without EJBs. This reduces developer productivity drastically.

Lately, Test-Driven Development has made good progress towards becoming a recommended practice in software development. The old EJB standards have the problem that entity beans in particular are impossible to construct outside the container, making unit testing hardly possible. Using service lookups (from JNDI) instead of dependency injection also makes unit testing EJB code complicated; consider

PersonDao mockDao = mock(PersonDao.class);
PersonFinderBean finder = new PersonFinderBean(mockDao);

versus

MockContextFactory.setAsInitial();
Context ctx = new InitialContext();
MockContainer mc = new MockContainer(ctx);
SessionBeanDescriptor dd = new SessionBeanDescriptor("java:comp/env/ejb/PersonDaoEjb",
    PersonDao.class, PersonDao.class, mock(PersonDao.class));
mc.deploy(dd);
PersonFinderBean finder = new PersonFinderBean();

(I adapted the latter example from “Streamlining Your EJB Tests With MockEJB” from BEA dev2dev, by Eoin Woods and Alexander Ananiev 10/17/2005.)

These two kinds of inversion of control, service locators versus dependency injection, are discussed in the classic article by Martin Fowler, and EJB 3 has adopted dependency injection in favor of JNDI lookups in client code.

Whereas the new EJB 3 specification fortunately addresses many of the problems of the earlier versions, my hunch is that, given no political constraints, Spring + Hibernate is still bound to be more productive, because of its greater flexibility and speed of development compared to EJB. For a more objective view, there are a lot of more or less slanted EJB 3 versus Spring comparisons around.

Two things are certain:

  1. EJB 1 and 2 have widely acknowledged problems. Whether or not these problems warrant reacting to them depends on the case. A working solution must not be tampered with just for the sake of it. Often it makes sense to improve it in conjunction with other changes, such as delivering new, useful functionality or fixing bugs.
  2. EJB 3 and Spring / Hibernate both provide an approach radically different from the EJB 1 or 2 world. If you suffered with EJB 2, you shouldn’t discard EJB 3 because of that. But it’s always worthwhile to check out the more independent open source solutions in addition to the standard.

I just read this in wicket-user mailing list:

In commercial ventures such as mine, I’m expected to demonstrate that any new technology…

2. Developers for which can be either be hired or trained easily and most importantly can be *average*. This means having lot of documentation and not having to go through source code to understand how things work.

and find it another example of a depressing problem in our profession. In how many other fields could a professional get away with only knowing how to write, but not how to read? 🙂

Having the source code readily available and even patchable is a great advantage of open source. That it is considered normal or even acceptable for programmers not to be able to read (good, non-reverse-engineered) source code is tragic.

I’m moving my blog from Laughing Panda to wordpress.com, as Laughing Panda is stopping blog hosting. It’s a pity, but Pebble proved to be too much work for the admins, who are running Panda as a hobby anyway. And after all, the real purpose of Panda is to provide infrastructure for open source software projects.

Update: now all the old Panda posts should be here, with the original dates (though not necessarily times), and lamentably without comments, as I didn’t know how to import the comments in a reasonable way from the Pebble XML.

Analysis in philosophy or logic is essentially cutting a conceptual whole into smaller pieces. This maps easily to agile software development, with its hierarchy of software being analysed into releases, releases into user stories, then into developer stories, and finally into developer tests. (Peter Schrier crystalised this for me in his March 2005 Agile Finland presentation (PDF).)

Software analysed

Robert C. Martin has written a post, “Analysis Vs. Design”, where he makes the point that analysis and design in making software are just two different points of view on the same issue. So my word-play of “analytical design” really means exploring this idea in the context of programming (which I believe to be creating the software design). The first developer tests prompted by the tasks at hand can serve as top-level starting points for the analytical design of the actual software component being programmed.

There was also a discussion on the TDD Yahoo group in November 2005 about what I find a symptom of this top-down design brought up by TDD. When you start from the top, you easily “bite off more than you can chew.” When this happens, you must temporarily switch your focus to the smaller detail and test-drive that detail before returning to the bigger picture. For example, if your task at hand needs a non-trivial String.replaceAll() call involving regular expressions containing metacharacters, you are better off pausing for a while and writing a test that just checks that your replaceAll() call does what you want. This is especially important when you are writing a slow integration test instead of a fast unit test, because if you can test the detail in a fast unit test, you’ll get feedback sooner, and better code coverage by unit tests.
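
As a hypothetical example, such a pausing test could be as small as this:

public void testDotIsEscapedSoOnlyLiteralDotsAreReplaced() {
    // "." is a regex metacharacter, so it must be escaped to replace only literal dots
    assertEquals("1,00", "1.00".replaceAll("\\.", ","));
    assertEquals("100", "100".replaceAll("\\.", ","));
}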

The theme comes up in varying forms, such as the problem of “Mother, May I” tests of Tim Ottinger or Mika Viljanen figuring out what tests to write. In these situations, it clearly helps to make the bootstrap tests as close to the business requirements as possible, and then analyse towards the details. Sven Gorts has written a nice discussion explicitly comparing top-down and bottom-up TDD, and reading it reinforced my opinion that top-down TDD is the way to go.

So to make an example of this, I’m pretending to start to work on a Scrum tool. Let’s imagine that the most critical feature is to see the sprint burndown chart, so I’ll start the implementation with this simple test:

package scrumtool;

import junit.framework.TestCase;

public class SprintBurndownTest extends TestCase {
    public void testRemainingIsSumOfRemainingOfTasks() {
        SprintBurndown chart = new SprintBurndown();
        Task t = new Task("Paint the burndown chart", 4);
        chart.add(t);
        assertEquals(4, chart.remaining());
    }
}

This prompts me to create the new classes SprintBurndown and Task, so I’ll do just that. With the quick fix / intention features of the IDE, it’s easy enough to fill in the Task constructor and the add and remaining methods of SprintBurndown.

I have a habit (that I believe I got from Jukka) of configuring my IDEs so that every generated method implementation just throws a new UnsupportedOperationException. So the IDE code completion only gets the test to compile, but test execution crashes on the second line with the exception thrown by the Task constructor. For now, I’ll just empty the methods, and make remaining() return -1 because it needs to return something.
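
For illustration, the emptied classes at this point look roughly like this:

public class Task {
    public Task(String name, int estimation) {
        // emptied: the generated body threw new UnsupportedOperationException()
    }
}

public class SprintBurndown {
    public void add(Task t) {
    }

    public int remaining() {
        return -1; // needs to return something
    }
}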

This gets me to this test failure that I wanted:

junit.framework.AssertionFailedError: expected: <4> but was: <-1>

So I make the simplest possible change to make the test pass:

package scrumtool;

public class SprintBurndown {

    public void add(Task t) {
    }

    public int remaining() {
        return 4;
    }
}

And ta-da, we’re on green.

Notice that the implementation doesn’t do anything with the Task class. Task was only created because the best bootstrap test case that I came up with needed it. And it should be even more obvious that the current implementation of remaining() will fail miserably in more complex usage scenarios ;), which hints that I might be correct in wanting a Task concept to help in dealing with that complexity. (Or I might be mistaken, and I should have started without the Task class, for example just passing Strings and ints to SprintBurndown.add(). Sorry if this bothers you, but this is the best and most real-world-resembling example that I could come up with.)

Despite good examples, I have not yet learned to strive for having only one assertion per test, nor to use separate JUnit fixtures efficiently. Rather, I want my tests to be good examples of what the code should do. So I will go on making my test method tell more about how the software under test should behave.

public void testRemainingIsSumOfRemainingOfTasks() {
    SprintBurndown chart = new SprintBurndown();
    Task t = new Task("Paint the burndown chart", 4);
    chart.add(t);
    assertEquals(4, chart.remaining());

    Task t2 = new Task("Task can be added to a sprint", 2);
    chart.add(t2);
    assertEquals(4 + 2, chart.remaining());
}

Happily this gives me just the failure I wanted:

junit.framework.AssertionFailedError: expected: <6> but was: <4>

And now, to get the test to pass, I really feel like I need to make use of Task. I want to add behaviour to Task test-driven: the problem of the burndown chart has been further analysed, and we have encountered the Task class.

At this point, it might be a good idea to temporarily comment out the currently failing assertion, as in orthodox TDD there must be only one test failing at a time, and I am just about to write a new failing test for Task.

This is the new test for Task and the implementation that got it to succeed:

// TaskTest.java

package scrumtool;

import junit.framework.TestCase;

public class TaskTest extends TestCase {

    public void testRemainingIsInitiallyOriginalEstimate() {
        Task t = new Task("Tasks can be filtered by priority", 123);
        assertEquals(123, t.getRemaining());
    }
}

// Task.java

package scrumtool;

public class Task {
    private int remaining;

    public Task(String name, int estimation) {
        this.remaining = estimation;
    }

    public int getRemaining() {
        return remaining;
    }
}

And after this, it was easy enough to make SprintBurndown so that the whole test passes:

package scrumtool;

public class SprintBurndown {
    private int remaining = 0;

    public void add(Task t) {
        remaining += t.getRemaining();
    }

    public int remaining() {
        return remaining;
    }
}

Now the whole test passes! So I’ll clean up the test class a bit.

package scrumtool;

import junit.framework.TestCase;

public class SprintBurndownTest extends TestCase {
    private SprintBurndown chart = new SprintBurndown();

    public void testRemainingIsSumOfRemainingOfTasks() {
        addTask("Paint the burndown chart", 4);
        assertEquals(4, chart.remaining());
        addTask("Task can be added to a sprint", 2);
        assertEquals(4 + 2, chart.remaining());
    }

    private void addTask(String name, int estimate) {
        Task t = new Task(name, estimate);
        chart.add(t);
    }
}

In case the point was lost in the midst of the many lines of code produced by a rather simplistic example, here it is again:

  1. Write your first programmer tests as high-level acceptance tests,
  2. and when making them pass, don’t hesitate to step to lower levels of analysis when encountering new non-trivial concepts or functionality that warrant their own tests.

I was very happy to take part in the coding dojo of 8 February 2006. The previous time I had attended was the first public session in Helsinki in November, and compared to that, the recent dojo went considerably more smoothly:

  • the cooking stopwatch worked excellently for keeping each person’s turn at ten minutes, with one of the pair rotating every five minutes. (My personal goal for the next dojo is to learn how to set up the watch correctly ;))
  • the audience kept moderately quiet, and the questions were less suggestive than before, i.e. more questions, fewer directions

And again, I learned valuable things on how other people mould the code, think about the micro-level design, and write tests.

The word “tests” just above should disturb you, if you think that we are practising Test-Driven Development (TDD) in the dojos.

In the randori-style dojo, as a pair produces code, everybody watches it on the projected screen. Sometimes the audience gives slight signals of approval, especially when a pair successfully completes a feature, runs the tests, and the xUnit bar turns green. I wanted to cheer not only for the green but also for the red bars. People found this strange, which bothered me, but regretfully I forgot to bring it up in the dojo retrospective. Now I’ll explain why I like the red bar in TDD.

By cheering for the successful red bar, I wanted to underline that making the test fail the right way is clarifying a requirement in an executable form. As I dig deeper into TDD and waddle amidst comments like “I want the tests to drive the design of my application” or “I want my tests to tell stories about my code”, and lately also the new Behaviour-Driven Development (BDD) ideas, I’m staggering towards the comprehension that when doing TDD, we’re not supposed to write tests but to specify requirements.

I’m not sure if Behaviour-Driven Development adds to TDD something more than just the change of the mindset and the vocabulary, but my dojo experience got me thinking that this might be more important than what I had understood. Consider the following (Ruby poker hand evaluator test with Test::Unit):

  def test_four_of_a_kind
    hand = Hand.new ["As", "Ah", "Ad", "Ac", "5s"]
    assert_equal('four of a kind', hand.evaluate)
    hand = Hand.new ["As", "Ah", "Ad", "4c", "5s"]
    assert_not_equal('four of a kind', hand.evaluate)
  end

as opposed to (more or less the same with rSpec):

  def four_of_a_kind
    hand = Hand.new ["As", "Ah", "Ad", "Ac", "5s"]
    hand.should.evaluate_to 'four of a kind'
    hand = Hand.new ["As", "Ah", "Ad", "4c", "5s"]
    hand.should.not.evaluate_to 'four of a kind'
  end

For the record, for this rSpec version to work, I had to add this method to the Hand class:

  def evaluate_to?(hand)
    return evaluate == hand
  end

While Ruby and BDD might or might not be cool, the real point I want to make is that even without the BDD twist, TDD is about design. So what we should practise in a TDD dojo is how to design by writing executable specifications. I think this is a fascinating, useful and non-trivial skill that is best rehearsed by working on small and simple examples, such as the tennis scoring and poker hand evaluator exercises that have been the assignments in the Helsinki dojo sessions so far.

Now we have been talking about getting more real-life kinds of problems into the coding dojos, so that the participants could learn how to do TDD, or at least programmer testing, better in an everyday work environment with application servers, databases, networks and whatnot nuisances. Certainly such a hands-on session would complement the excellent books on the subject well, and help people in adopting developer testing, but I think it would be more about hands-on dependency-breaking or specific technology skills than about design.

So although I welcome the idea of exercising in a more realistic setting, I hope that the randori sessions on simple katas will continue as well.

Speaking of dojos, you can now register for the next Helsinki area dojo on Wednesday March 15, 2006, at 18:00–21:00 in Espoo, at the premises of Fifth Element. Let’s see whether it turns out more hands-on or more micro-level design oriented, but judging from past experience, at least a good time is guaranteed.

If you dig a ditch slowly, you still make progress. But when programming, even if hacking away at a dirty solution might seem quick, it can backfire by hindering your progress in the long run. And when this happens, it wrecks the best intentions of agile project management to predict reliably.

Scrum is the best thing that happened to software project management since the invention of sweets in meetings. Scrum frees developers from excessive meetings and paperwork, and helps them to communicate their problems and feelings. And most importantly, the sprint burndown chart provides a good, empirical estimate on whether the project is on schedule or not.

It’s excellent that big software companies are publicising their use of Scrum.

Scrum is not a silver bullet, but rather like a good pair of scissors that does its limited job well. To succeed, Scrum needs several more things such as

  • good communication
  • good communication (worth repeating, don’t you think?)
  • a hard-working product owner that tells the developers about the wanted features and their relative priorities
  • a motivated, self-organising team
  • a scrum master that coaches and supports, without commanding and controlling
  • adequate tools for handling the product and sprint backlogs
  • good programmers

A real-world experience with Scrum with good “lessons learned” material can be found in the presentation (PDF) of Timo Lukumaa in the September 2005 Agile Finland session.

I will now dwell on my last point about Scrum, or indeed any agile project management, needing good programmers. This is by no means a novel or original idea. For example, Pekka Abrahamsson from VTT in Oulu underlined that agile software development is not for beginners in his excellent “Agile Software Development” presentation at the inaugural Finland Agile Seminar in March 2005. But I must confess that I hadn’t deeply accepted this view until recently.

I used to think that agile methods might help just about any project or programming feat, and specifically that even junior developers could benefit from them. And surely they can. But I will now try to explain why I conclude that an agile project needs skilled programmers to succeed.

At the heart of agile project management is the idea of linear, quantifiable progress: the velocity of the project can be measured by the rate of completing the wanted features, and this is used to predict the schedule. This is exactly the function of the sprint burndown chart of Scrum, mentioned above. The developers break each feature down into measurable tasks, estimate the relative sizes of the tasks, and the process tracks the rate of completing the tasks. And this is how you get data on the velocity of completing the features: if making the full text search is a task twice the size of making the login screen, and making the login screen took 17 hours, then in all probability making the full text search will take 34 hours. Scrum rocks because it gives this kind of constantly improving data with minimal effort.

But creating software is tricky. Craig Larman says that creating software is new product development rather than predictable or mass manufacturing, and this goes well in line with the famous thought of Jack Reeves that source code is design. Creating software is not like cutting a plank, where you could safely say that you are 50 % ready when you have reached the middle of the plank with your saw. It’s not like building a toy cottage where having the walls standing firmly and lacking the roof would show that you’re well on the way.

Instead, what we software developers produce is actually a collection of source code listings that do not serve the end purpose of our work until they are built into executable code running on a computer. And to get there, the software must compile, run, and while running do useful things.

This is where the non-linearity comes in. When making more features, it is all too easy to break the features that were supposedly completed earlier. It’s very easy to break the source code in such a way that it does not even compile to executable form anymore. But most importantly, it’s too easy to grow the system in a way that increases the cost of changes by time.

At least since Kent Beck’s first 1999 edition of Extreme Programming Explained: Embrace Change, the idea that the cost of change does not need to increase over time has been a central agile tenet. And Scrum seems to assume that the cost of change within the project remains constant, because there is no coefficient for the velocity as a function of time. In my login screen / full text search example, it should not matter in which iteration of the project the full text search is implemented.

Robert C. Martin (“Uncle Bob”) has written a great analysis of the problem of the increasing cost of change in his book Agile Software Development: Principles, Patterns, and Practices (2002). You can find what seems to be more or less the same text in his article “Design Principles and Design Patterns” (PDF). Here Uncle Bob neatly slices the awkward feeling of “this code is difficult to work with and expensive to change” into four distinct symptoms of rotting design: rigidity, fragility, immobility and viscosity. He then presents the principles of object-oriented design and demonstrates how they contribute to keeping the design of the software such that the cost of change stays under control.

The more I learn about object-oriented programming, the less I feel I know. Reading Uncle Bob’s book was an experience similar to studying logic at the university: after the first couple of logic courses, I felt that I had mastered all that was special to logic, being able to work with both syntactic and semantic methods of first-order predicate logic, and vaguely aware that “some people do weird things with logic, such as leaving out proof by contradiction.” It was only when I understood alternative and higher-order logics, proof theory and model theory a bit more deeply that I realised I had just been climbing a little foothill, from the top of which one saw a whole unknown mountain range with a variety of peaks to climb, from Frege’s appalling syntax of predicate calculus via Hintikka’s modalities to Martin-Löf’s constructive type theory.

Making good software design (which, if you believe what Reeves writes, is the same thing as programming well) is difficult. It takes a lot of time, practice, reflection and study to advance in it. And there will be a lot of moments when a programmer overestimates her capabilities and writes something that she sees as good but which turns out to be bad design, a debt that the software incurs and must pay back later.

Hence Scrum does not work in a vacuum: if your team does not program well, progress is not linear as Scrum assumes. The design rots as the codebase grows, this slows down the development of new features and the changing of old ones, and the predictions that Scrum provides start to err on the optimistic side.

I used to think that Uncle Bob’s Agile Software Development had a bad title despite being one of the most valuable programming books I know of. I thought that the subtitle Principles, Patterns, and Practices was a lot more indicative of the content. But with the line of thinking expressed here, I am coming to the conclusion that in the end, good software design is an intrinsic part of developing software in the agile way, because it is necessary for controlling the cost of change over time.

I would be delighted if someone, possibly knowing more about Scrum, could show holes in my thinking. Meanwhile, I see no other option than practising to be a better programmer 🙂

There is a controller class that I’ll call FooController here. It is a Spring Controller class used for several controller beans, which we could call foo, fooBar and fooBaz:

  <bean name="foo" class="com.foo.web.FooController.class"/>
  <bean name="fooBar" class="com.foo.web.FooController.class"/>
  <bean name="fooBaz" class="com.foo.web.FooController.class"/>

Now, for one of the beans, foo, we needed some additional functionality, which we injected into the bean:

  <bean name="foo" class="com.foo.web.FooController.class">
    <property name="baz"><value><ref bean="theBaz"/></value></property>
  </bean>
  <bean name="fooBar" class="com.foo.web.FooController.class"/>
  <bean name="fooBaz" class="com.foo.web.FooController.class"/>

And we changed the FooController class accordingly, for the case needed in foo.

Then we started getting NullPointerExceptions from fooBar and fooBaz when they were executing the code of FooController that required baz, even though the views of fooBar and fooBaz did not need it.

So, even though Spring beans are by default singletons in the Spring sense, the Spring singleton sense is rather like Uncle Bob Martin’s Just Create One: different bean definitions using the same class each get their own “singleton” instance, so providing the dependencies to the foo instance does not help the other instances.
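
To put the point in code (a JUnit-style sketch; the configuration file name is made up):

ApplicationContext context = new ClassPathXmlApplicationContext("foo-beans.xml");
// the same bean definition always yields the same instance...
assertSame(context.getBean("foo"), context.getBean("foo"));
// ...but two definitions of the same class yield two separate instances
assertNotSame(context.getBean("foo"), context.getBean("fooBar"));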

Noticing this kind of error earlier in development is the reason why I prefer initialising the crucial dependencies of an object in the constructor, preferably making them final members. This way you cannot create an instance in a crippled state in the first place. Speaking fancy, this would be “type 3” or constructor-based dependency injection. And for more complex dependencies I like context IoC as well.
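
A sketch of what that looks like here (the Baz type name is made up to match the baz property; the Spring configuration would then use constructor-arg instead of property):

public class FooController {
    private final Baz baz; // final: an instance cannot even be created without its dependency

    public FooController(Baz baz) {
        this.baz = baz;
    }

    // request handling methods elided
}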

Anyway, for a simple little problem like ours, we wanted a quick, simple solution. Here is the list of solutions we came up with. If you have more ideas, I would be thrilled to hear them!

  1. Separate (sub)classes for each controller. Some day they will surely grow apart, but at this moment I felt awkward about littering the class space because of what is basically a configuration or dependency wiring issue.
  2. Making FooController a traditional singleton by using factory-method="getInstance" to always return the same, statically held instance of the class, and making the default constructor private. But this would have been verbose in the Spring configuration XML and would again have littered our Java code because of a wiring issue.
  3. Adding the dependency wiring to each controller bean entry in the Spring XML. But this would have caused duplication in the XML.
  4. Making foo the parent bean of fooBar and fooBaz.

For now, we settled on the last solution, which seems OK to me. Sometimes I have thought that the parent-child hierarchy in the Spring configuration XML might make things too complicated, but in this case it feels justified, and the configuration keeps looking fairly simple.

  <bean name="foo" class="com.foo.web.FooController.class">
    <property name="baz"><value><ref bean="theBaz"/></value></property>
  </bean>
  <bean name="fooBar" class="com.foo.web.FooController.class" parent="foo"/>
  <bean name="fooBaz" class="com.foo.web.FooController.class" parent="foo"/>