Analysis in philosophy or logic is essentially cutting a conceptual whole into smaller pieces. This connects readily to agile software development with its hierarchy: software is analysed into releases, releases into user stories, then into developer stories, and finally into developer tests. (Peter Schrier crystallised this for me in his March 2005 Agile Finland presentation (PDF).)

Software analysed

Robert C. Martin has written a post “Analysis Vs. Design” where he makes the point that analysis and design in making software are just two different points of view on the same issue. So my word-play of “analytical design” really means exploring this idea in the context of programming (which I believe to be creating the software design). The first developer tests prompted by the tasks at hand can serve as top-level starting points for the analytical design of the actual software component being programmed.

There was also discussion on the TDD Yahoo group in November 2005 on what I find a symptom of this top-down design brought up by TDD. When you start from the top, you easily “bite off more than you can chew.” When this happens, you must temporarily switch your focus to the smaller detail and test-drive that detail before returning to view the bigger picture. For example, if your task at hand needs a non-trivial String.replaceAll() call involving regular expressions containing metacharacters, you are better off pausing for a while and writing a test that just checks that your replaceAll() call does what you want. This is especially important when you are writing a slow integration test instead of a fast unit test, because if you can test the detail in a fast unit test, you’ll get feedback sooner, and better code coverage by unit tests.
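To make the regular-expression pitfall concrete, here is a minimal, hypothetical example of such a focused check (the version-string scenario and names are mine, not from any project mentioned above):

```java
// A fast, focused check of a single regex detail, kept apart from any
// slow integration test. Masking the dots of a version string requires
// escaping the '.' metacharacter, or replaceAll matches every character.
class ReplaceAllDetail {

    static String maskDots(String version) {
        // "\\." matches a literal dot; a bare "." would match any character
        return version.replaceAll("\\.", "-");
    }

    public static void main(String[] args) {
        System.out.println("1.2.3".replaceAll(".", "-")); // unescaped: -----
        System.out.println(maskDots("1.2.3"));            // escaped:   1-2-3
    }
}
```

A two-line test like this settles the metacharacter question in milliseconds, instead of leaving it to be discovered inside a slow end-to-end run.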

The theme comes up in varying forms, such as the problem of “Mother, May I” tests of Tim Ottinger or Mika Viljanen figuring out what tests to write. In these situations, it clearly helps to make the bootstrap tests as close to the business requirements as possible, and then analyse towards the details. Sven Gorts has written a nice discussion explicitly comparing top-down and bottom-up TDD, and reading it reinforced my opinion that top-down TDD is the way to go.

So to make an example of this, I’ll pretend to start working on a Scrum tool. Let’s imagine that the most critical feature is to see the sprint burndown chart, so I’ll start the implementation with this simple test:

package scrumtool;

import junit.framework.TestCase;

public class SprintBurndownTest extends TestCase {
    public void testRemainingIsSumOfRemainingOfTasks() {
        SprintBurndown chart = new SprintBurndown();
        Task t = new Task("Paint the burndown chart", 4);
        chart.add(t);
        assertEquals(4, chart.remaining());
    }
}

This prompts me to create the new classes SprintBurndown and Task, so I’ll do just that. With the quick fix / intention features of the IDE, it’s easy enough to fill in the Task constructor as well as the add and remaining methods of SprintBurndown.

I have a habit (that I believe I got from Jukka) of configuring my IDEs so that every generated method implementation just throws a new UnsupportedOperationException. So the IDE code completion only gets the test to compile, but test execution crashes on the second line with the exception thrown by the Task constructor. For now, I’ll just empty the methods, and make remaining() return -1 because it needs to return something.
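For clarity, the intermediate state described above looks roughly like this (a sketch, with the package declaration left out):

```java
// Sketch of the code right after the IDE quick fixes: the generated
// UnsupportedOperationException bodies have been emptied, and remaining()
// returns a dummy value so the test compiles and fails as intended.
class Task {
    Task(String name, int estimate) {
        // emptied IDE-generated body
    }
}

class SprintBurndown {
    void add(Task t) {
        // emptied IDE-generated body
    }

    int remaining() {
        return -1; // needs to return something
    }
}
```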

This gets me to this test failure that I wanted:

junit.framework.AssertionFailedError: expected: <4> but was: <-1>

So I make the simplest possible change to make the test pass:

package scrumtool;

public class SprintBurndown {

    public void add(Task t) {
    }

    public int remaining() {
        return 4;
    }
}

And ta-da, we’re on green.

Notice that the implementation doesn’t do anything with the Task class. Task was only created because the best bootstrap test case that I came up with needed it. And it should be even more obvious that the current implementation of remaining() will fail miserably in more complex usage scenarios ;), which hints that I might be correct in wanting a Task concept to help in dealing with that complexity. (Or, I might be mistaken, and I should have started without the Task class, for example just passing Strings and ints to SprintBurndown.add(). Sorry if this bothers you, but this is the best and most real-world-resembling example that I could come up with.)

Despite good examples, I have not yet learned to strive for having only one assertion per test, nor to use separate JUnit fixtures efficiently. Rather, I want my tests to be good examples of what the code should do. So I will go on making my test method tell more of how the software under test should behave.

public void testRemainingIsSumOfRemainingOfTasks() {
    SprintBurndown chart = new SprintBurndown();
    Task t = new Task("Paint the burndown chart", 4);
    chart.add(t);
    assertEquals(4, chart.remaining());

    Task t2 = new Task("Task can be added to a sprint", 2);
    chart.add(t2);
    assertEquals(4 + 2, chart.remaining());
}

Happily this gives me just the failure I wanted:

junit.framework.AssertionFailedError: expected: <6> but was: <4>

And now, to get the test to pass, I really feel like I need to make use of Task. I want to add behaviour to Task test-driven; the problem of the burndown chart has been analysed further, and we have encountered the Task class.

At this point, it might be a good idea to temporarily comment out the currently failing assertion, as in orthodox TDD there must be only one test failing at a time, and I am just about to write a new failing test for Task.

This is the new test for Task and the implementation that got it to succeed:

// TaskTest.java

package scrumtool;

import junit.framework.TestCase;

public class TaskTest extends TestCase {

    public void testRemainingIsInitiallyOriginalEstimate() {
        Task t = new Task("Tasks can be filtered by priority", 123);
        assertEquals(123, t.getRemaining());
    }
}

// Task.java

package scrumtool;

public class Task {

    private int remaining;

    public Task(String name, int estimation) {
        this.remaining = estimation;
    }

    public int getRemaining() {
        return remaining;
    }
}

And after this, it was easy enough to make SprintBurndown so that the whole test passes:

package scrumtool;

public class SprintBurndown {

    private int remaining = 0;

    public void add(Task t) {
        remaining += t.getRemaining();
    }

    public int remaining() {
        return remaining;
    }
}

Now the whole test passes! So I’ll clean up the test class a bit.

package scrumtool;

import junit.framework.TestCase;

public class SprintBurndownTest extends TestCase {
    private SprintBurndown chart = new SprintBurndown();

    public void testRemainingIsSumOfRemainingOfTasks() {
        addTask("Paint the burndown chart", 4);
        assertEquals(4, chart.remaining());
        addTask("Task can be added to a sprint", 2);
        assertEquals(4 + 2, chart.remaining());
    }

    private void addTask(String name, int estimate) {
        Task t = new Task(name, estimate);
        chart.add(t);
    }
}

In case the point was lost in the midst of the many lines of code produced by a rather simplistic example, here it is again:

  1. Write your first programmer tests as high-level acceptance tests,
  2. and when making them pass, don’t hesitate to step to lower levels of analysis when encountering new non-trivial concepts or functionality that warrant their own tests.

I was very happy to take part in the coding dojo of 8 February 2006. The previous time I had attended was the first public session in Helsinki in November, and compared to that, the recent dojo went considerably more smoothly:

  • the cooking stopwatch worked excellently for keeping each person’s turn at ten minutes, with one of the pair rotating every five minutes. (My personal goal for the next dojo is to learn how to set up the watch correctly ;))
  • the audience kept moderately quiet, and the questions were less suggestive than before — i.e. more questions, less directions

And again, I learned valuable things on how other people mould the code, think about the micro-level design, and write tests.

The word “tests” just above should disturb you, if you think that we are practising Test-Driven Development (TDD) in the dojos.

In the randori-style dojo, as a pair produces code, everybody watches it on the projected screen. Sometimes the audience gives slight signals of appraisal, especially when a pair successfully completes a feature, runs the tests, and the xUnit bar turns green. I wanted to cheer not only for the green but also for the red bars. People found this strange, which bothered me, but regretfully I forgot to bring this up in the dojo retrospective. Now I’ll explain why I like the red bar in TDD.

By cheering for the successful red bar, I wanted to underline that making the test fail the right way is clarifying a requirement in an executable form. As I dig deeper into TDD and waddle amidst comments like “I want the tests to drive the design of my application” or “I want my tests to tell stories about my code”, and lately also the new Behaviour-Driven Development (BDD) ideas, I’m staggering towards the comprehension that when doing TDD, we’re not supposed to write tests but to specify requirements.

I’m not sure if Behaviour-Driven Development adds something more to TDD than just a change of mindset and vocabulary, but my dojo experience got me thinking that this might be more important than I had understood. Consider the following (Ruby poker hand evaluator test with Test::Unit):

  def test_four_of_a_kind
    hand = Hand.new ["As", "Ah", "Ad", "Ac", "5s"]
    assert_equal('four of a kind', hand.evaluate)
    hand = Hand.new ["As", "Ah", "Ad", "4c", "5s"]
    assert_not_equal('four of a kind', hand.evaluate)
  end

as opposed to (more or less the same with rSpec):

  def four_of_a_kind
    hand = Hand.new ["As", "Ah", "Ad", "Ac", "5s"]
    hand.should.evaluate_to 'four of a kind'
    hand = Hand.new ["As", "Ah", "Ad", "4c", "5s"]
    hand.should.not.evaluate_to 'four of a kind'
  end

For the record, for this rSpec version to work, I had to add this method to the Hand class:

  def evaluate_to?(hand)
    return evaluate == hand
  end

While Ruby and BDD might or might not be cool, the real point I want to make is that even without the BDD twist, TDD is about design. So what we should practise in a TDD dojo is how to design by writing executable specifications. I think that this is a fascinating, useful and non-trivial skill that is best rehearsed when working on small and simple examples, such as the tennis scoring and poker hand evaluator that have been the assignments in the Helsinki dojo sessions so far.

Now we have been talking about getting more real-life kinds of problems into the coding dojos, so that the participants could learn how to do TDD, or at least programmer testing, better in an everyday work environment with application servers, databases, networks and whatnot nuisances. Certainly such a hands-on session would complement the excellent books on the subject well, and help people adopt developer testing, but I think it would be more about hands-on dependency-breaking or specific technology skills than about design.

So although I welcome the idea of exercising in a more realistic setting, I hope that the randoris for doing simple katas will continue as well.

Speaking of dojos, you can now register for the next Helsinki area dojo on Wednesday March 15, 2006 at 18:00-21:00 in Espoo at the premises of Fifth Element. Let’s see if it will turn out more hands-on or micro-level design, but judging from past experience, at least a good time is guaranteed.

If you dig a ditch slowly, you make progress anyway. But when programming, even if hacking away a dirty solution might seem quick, it might backfire in hindering your progress in the long run. And when this happens, it wrecks the best intentions of agile project management to predict reliably.

Scrum is the best thing that happened to software project management since the invention of sweets in meetings. Scrum frees developers from excessive meetings and paperwork, and helps them to communicate their problems and feelings. And most importantly, the sprint burndown chart provides a good, empirical estimate on whether the project is on schedule or not.

It’s excellent that big software companies are publicising their use of Scrum.

Scrum is not a silver bullet, but rather like a good pair of scissors that does its limited job well. To succeed, Scrum needs several more things such as

  • good communication
  • good communication (worth repeating, don’t you think?)
  • a hard-working product owner that tells the developers about the wanted features and their relative priorities
  • a motivated, self-organising team
  • a scrum master that coaches and supports, without commanding and controlling
  • adequate tools for handling the product and sprint backlogs
  • good programmers

A real-world experience with Scrum with good “lessons learned” material can be found in the presentation (PDF) of Timo Lukumaa in the September 2005 Agile Finland session.

I will now dwell on my last point about Scrum — or indeed any agile project management — needing good programmers. This is by no means a novel or original idea. For example Pekka Abrahamsson from the VTT of Oulu underlined that agile software development is not for beginners in his excellent “Agile Software Development” presentation in the inaugural Finland Agile Seminar of March 2005. But I must confess that I hadn’t deeply accepted this view until recently.

I used to think that agile methods might help just about any project or programming feat, and specifically that even junior developers could benefit from them. And surely they can. But I will now try to explain why I conclude that an agile project needs skilled programmers to succeed.

At the heart of agile project management is the idea of linear, quantifiable progress: the velocity of the project can be measured by the rate of completing the wanted features, and this is used to predict the schedule. This is exactly the function of the sprint burndown chart of Scrum, mentioned above. The developers break each feature down to measurable tasks, estimate the relative sizes of the tasks, and the process tracks the rate of completing the tasks. And this is how you get data on the velocity of completing the features: if making the full text search is a task twice the size of making the login screen, and making the login screen took 17 hours, in all probability making the full text search will take 34 hours. Scrum rocks because it gives this kind of constantly-improving data with minimal effort.
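The arithmetic in the example above is trivial, but spelled out as code (a sketch, with names of my own invention) it shows exactly what the burndown prediction assumes:

```java
// Back-of-the-envelope velocity arithmetic: a completed reference task
// calibrates the estimates of the remaining tasks. The linearity assumption
// is baked in: relative size alone determines the predicted effort.
class VelocityEstimate {

    // predict hours for a task from its size relative to a finished task
    static long predictHours(int referenceHours, double relativeSize) {
        return Math.round(referenceHours * relativeSize);
    }

    public static void main(String[] args) {
        // the login screen took 17 hours; full text search is twice its size
        System.out.println(predictHours(17, 2.0)); // prints 34
    }
}
```

Note that there is no parameter for "which sprint we are in": the prediction is the same early and late in the project, which is precisely the assumption questioned below.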

But creating software is tricky. Craig Larman says that creating software is new product development rather than predictable or mass manufacturing, and this goes well in line with the famous thought of Jack Reeves that source code is design. Creating software is not like cutting a plank, where you could safely say that you are 50 % ready when you have reached the middle of the plank with your saw. It’s not like building a toy cottage where having the walls standing firmly and lacking the roof would show that you’re well on the way.

Instead, what we software developers develop is actually a collection of source code listings that do not serve the end purpose of our work until they are built into machine code executed on a computer. And to get there, the software must compile, run, and, when running, do useful things.

This is where the non-linearity comes in. When making more features, it is all too easy to break the features that were supposedly completed earlier. It’s very easy to break the source code in such a way that it does not even compile to executable form anymore. But most importantly, it’s too easy to grow the system in a way that increases the cost of changes by time.

At least since Kent Beck’s first 1999 edition of Extreme Programming Explained: Embrace Change, the idea that the cost of change does not need to increase over time has been a central agile tenet. And Scrum seems to assume that the cost of change within the project remains constant, because there is no kind of coefficient for the velocity that would be a function of time. In my login screen / full text search example, it should not matter in which iteration of the project the full text search is implemented.

Robert C. Martin (“Uncle Bob”) has written a great analysis of the problem of the increasing cost of change in his book Agile Software Development: Principles, Patterns, and Practices (2002). You can find what seems to be more or less the same text in his article “Design Principles and Design Patterns” (PDF). Here Uncle Bob neatly slices the awkward feeling of “this code is difficult to work with and expensive to change” into four distinct symptoms of rotting design: rigidity, fragility, immobility and viscosity. He then presents the principles of object-oriented design and demonstrates how they contribute to keeping the design of the software such that the cost of change stays under control.

The more I learn about object-oriented programming, the less I feel I know. Reading Uncle Bob’s book was an experience similar to studying logic at the university: After the first couple of logic courses, I felt that I had mastered all that was special to logic, being able to work with both syntactic and semantic methods of first-order predicate logic, and vaguely aware that “some people do weird things with logic such as leaving out the proof by contradiction.” It was only when I understood alternative and higher-order logics, proof theory and model theory a bit more deeply that I realised I had just been climbing a little foothill, from the top of which one saw a whole unknown mountain range with a variety of peaks to climb, from Frege’s appalling syntax of predicate calculus via Hintikka’s modalities to Martin-Löf’s constructive type theory.

Making good software design — which, if you believe what Reeves writes, is the same thing as programming well — is difficult. It takes a lot of time, practice, reflection and study to advance in it. And there will be a lot of moments when a programmer overestimates her capabilities and writes something that she sees as good but which turns out to be bad design, the debt of which the software incurs to be paid later.

Hence Scrum does not work in a vacuum: if your team does not program well, the progress is not linear as Scrum assumes. The design rots as the codebase grows, this slows down the development of new features and changing old features, and the predictions that Scrum provides start to err on the optimistic side.

I used to think that Uncle Bob’s Agile Software Development had a bad title despite being one of the most valuable programming books I know of. I thought that the subtitle Principles, Patterns, and Practices was a lot more indicative of the content. But with the line of thinking expressed here, I am coming to the conclusion that in the end, good software design is an intrinsic part of developing software in the agile way, because it is necessary for controlling the cost of change over time.

I would be delighted if someone, possibly knowing more about Scrum, could show holes in my thinking. Meanwhile, I see no other option than practising to be a better programmer 🙂

There is a controller class that I’ll call FooController here. It is a Spring Controller used for several controller beans, which could be called foo, fooBar and fooBaz:

  <bean name="foo" class="com.foo.web.FooController"/>
  <bean name="fooBar" class="com.foo.web.FooController"/>
  <bean name="fooBaz" class="com.foo.web.FooController"/>

Now, for one of the beans, foo, we needed some other functionality, which we injected into the bean:

  <bean name="foo" class="com.foo.web.FooController">
    <property name="baz"><ref bean="theBaz"/></property>
  </bean>
  <bean name="fooBar" class="com.foo.web.FooController"/>
  <bean name="fooBaz" class="com.foo.web.FooController"/>

And we made changes accordingly to the FooController class, for the case that was needed in foo.

Then we started getting NullPointerExceptions from fooBar and fooBaz when they were executing the code of FooController that required baz, even though the views of fooBar and fooBaz did not need it.

So, even though Spring beans are by default singletons in the Spring sense, the Spring singleton sense is rather like Uncle Bob Martin’s Just Create One. Different beans having “singleton” instances of the same class will have their own instances, so providing the dependencies to the foo instance does not help the other instances.

Noticing these kinds of errors earlier in development is the reason why I prefer initialising the crucial dependencies of an object in the constructor, preferably making them final members. This way you cannot create an instance in a crippled state in the first place. Speaking fancy, this would be “type 3” or constructor-based dependency injection. And for more complex dependencies I like context IoC as well.
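As a sketch of what I mean (with hypothetical names, not the actual project code): a class whose crucial dependency arrives through the constructor simply cannot exist in a crippled state.

```java
// Constructor-based ("type 3") dependency injection, sketched with
// hypothetical names: the collaborator is final and checked at
// construction, so a half-wired instance can never be created.
class GuardedController {

    interface Baz {
        String process(String input);
    }

    private final Baz baz; // set exactly once, never null afterwards

    GuardedController(Baz baz) {
        if (baz == null) {
            throw new IllegalArgumentException("baz is required");
        }
        this.baz = baz;
    }

    String handle(String request) {
        return baz.process(request);
    }
}
```

With setter injection, the same object could be created, passed around, and only blow up with a NullPointerException much later; here the failure happens at wiring time.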

Anyway, for a little simple problem like ours, we wanted to have a quick, simple solution. Here is the list of solutions we came up with. If you have more ideas I would be thrilled to hear them!

  1. Separate (sub)classes for each controller. Some day they will surely grow apart, but at the moment I felt awkward about littering the class space because of what is basically a configuration or dependency wiring issue.
  2. Making FooController a traditional singleton by using factory-method="getInstance" to always return the same, statically held instance of the class and making the default constructor private. But this would have been verbose in the Spring configuration XML and would again have littered our Java code because of a wiring issue.
  3. Adding the dependency wiring to each controller bean entry in the Spring XML. But this would have caused duplication in the XML.
  4. Making foo the parent bean of fooBar and fooBaz.

For now, we settled on the last solution, which seems OK to me. Sometimes I have thought that the parent-child hierarchy in the Spring configuration XML might make it too complicated, but in this case it feels justified and the configuration stays fairly simple.

  <bean name="foo" class="com.foo.web.FooController">
    <property name="baz"><ref bean="theBaz"/></property>
  </bean>
  <bean name="fooBar" class="com.foo.web.FooController" parent="foo"/>
  <bean name="fooBaz" class="com.foo.web.FooController" parent="foo"/>

I was a fan of the stories of Thor Heyerdahl — I read Kon-Tiki, Amon-Ra and also a book on the mystery of the huge statues on Easter Island. The scientific-adventurous style of the books is great. But what I will write about now is a little curiosity that stuck to my mind in the Ra story.

Ra was the papyrus reed boat that served in one of Heyerdahl’s famous feats of experimental archaeology. He wanted to test the hypothesis that ancient peoples would have had vessels capable of crossing the oceans. Earlier, with the balsa raft Kon-Tiki, he had shown that the ancient South Americans could have travelled all the way to Easter Island. And with Ra he intended to show that the old Egyptian reed boat design was capable of crossing the Atlantic Ocean.

Heyerdahl had the reed boat built following the designs from the old Egyptian paintings. Many of the paintings showed a rope between the bottom and the upward-tilted back tip of the boat. Heyerdahl thought that the function of the rope was to force the back end of the boat to curve upwards, but when he observed that the boat stayed in good shape even without the rope, he decided to leave it out. This proved to be a bad idea:

A decision to cut what proved to be a crucial line between the prow and the deck caused the stern to sink and the ship to increasingly list to one side. After almost two months afloat, and with a hurricane approaching, the Ra was abandoned at sea frustratingly short of its goal, still floating but nearly uninhabitable.

(Donald P. Ryan, THE RA EXPEDITIONS REVISITED.)

Heyerdahl learned valuable lessons, built a new reed boat Ra 2, and crossed the Atlantic with it. In the realm of commercial software production, this might be equivalent, for example, to a quick and easy bug fix, or to a major financial failure, depending on various circumstances.

Obviously you would not be reading even this far if you did not find some interest in fabricated parallels between software and the real world, such as this: the mistake of Heyerdahl is like leaving the programmer testing out of XP. More generally, even good advice can turn against you, if you make a misinterpretation or false assumption, and end up not doing quite what the advisor was thinking about.

The extreme programming example goes to the heart of what I want to convey. If you take such a broad and complex whole as extreme programming, and do not realise how the separate parts need each other’s support, you might inadvertently cut off a crucial line. This might well be provoked by an illusion of resource scarcity or other types of laziness, for which we have some great expressions in the Finnish language: rusinat pullasta (“only the raisins from the sweet buns”) and yli siitä, mistä aita on matalin (“crossing the fence where it is lowest”). We Finns pride ourselves on eating the boring dry doughy (domestic) part as well as the sweet (imported) raisins, and on crossing fences at their highest parts so as not to look like slackers. The kind of slackery of not jumping that high when it’s not strictly necessary for getting to the other side is nowadays fashionable as effectiveness, and I think it is exactly in the pursuit of this kind of effectiveness that you might go astray.

Already in the first edition of the “white” extreme programming book, Kent Beck underlined the fact that the different legs of XP support each other and that the process would fall down like a house of cards if you left one wall out. (Without having yet gotten around to reading the second edition, I should mention that it’s most probably well worth a read.) Obviously he did not describe his process as a house of cards, but that’s the feeling that stuck with me. Later on, I think Beck pretty much conceded that of the XP practices, at least test-driven development could stand alone, so even if it might not save your project, you might be able to do good work practising TDD even in a project that does not embrace the other XP practices. Though if refactoring were limited politically, that might make it very difficult to do TDD.

Of the other XP practices, I think that perhaps pair programming might fly alone as well. Without proper first-hand experience, my guess is not an educated one. But something I have been through is what happens when you take the XP path of leaving out the big design up front, or indeed almost any design up front (which is already getting too extreme to be proper XP), but fail to implement automatic testing, merciless refactoring and pair programming. If this doesn’t result in a mess for you, you had better play a round of the lottery. It is exactly these kinds of faulty shortcuts that give “XP” its bad reputation as glorified hacking.

Another, more subtle example of the mutual support of the XP practices is the crucial but easily overlooked third step of test-driven development: red – green – refactor. If you leave the first kludge that seems to work in the repository, you’re not actually doing TDD but causing trouble:

Once the tests pass the next step is to start over (you may first need to refactor any duplication out of your design as needed, turning TFD into TDD).

Scott W. Ambler

To keep your TDD boat afloat, you need to refactor both the production and test code continuously. It’s not enough to just write a test always before writing code. Another necessity is that all tests are run often and showing green, as maintaining an exhaustive suite of tests is a part of TDD.

Another example, still from the XP / agile realm but from a somewhat higher point of view, would be overconfidence in developer tests as a measure of progress. For me so far, customer-defined automatic tests, or indeed any predefined customer tests, have been but a dream. And to measure whether the software is close to complete, it is exactly the customer / user tests that matter. Even if you have good code coverage by programmer tests and you think you have implemented a good part of the required features, you must fight to get the feedback from the customer, or whoever in your environment is closest to the real user of the system.

What I still saw, even with TDD, are misunderstandings between the project’s customers (also known as business experts or product owners) and the software developers. Even if the deployed code was almost bug-free, it didn’t do everything the customer had expected. We can enlarge on the concept of TDD and reduce the risk of delivering the ‘wrong’ code with what I call customer test-driven development.

— Lisa Crispin: “Using Customer Tests to Drive Development”

To make all these long stories one short one, I claim that you must not adopt a process or a method, and then leave out a crucial part, because then you might fail horribly. Am I then saying that you should adopt any good-looking process blindly and without criticism?

If you got this far and think so, I hope that you will come up with a general counterexample in the next few seconds 🙂

Well, of course I don’t advocate following processes blindly, because cargo-cult programming can be just as bad a problem as taking the faulty shortcuts. In fact I think that the problems are related — omitting crucial parts of a method can be seen as doing cargo-cult on the remaining parts, without fully understanding their dependencies and interactions.

So in the end my point here is not really about being lazy in adopting practices, but about being lazy in thinking about your work. The steps on the journey of adopting a method or a process effectively can be seen as the stages in reading texts on a subject with which one becomes more familiar, moving from restatement through description and interpretation to critical reading. To master a skill, you must have the passion, perseverance and ability to learn, so that you will build a couple more reed boats before you get one to cross an ocean with.

We have gradually introduced a little Spring in the system I’m working on, and the results have been good. It’s getting easier to write testable code that can be read and modified efficiently, which is saving our customer money. (My account manager told me to end every paragraph of whatever I say or write with “saving money” or some such.)

However, the silver bullet of Spring does not come without teething problems. A while back I was making a simple Spring singleton service that stored state that was periodically updated. The problem was that the state did not refresh. In despair, I fell back on the despised resort of debug logging. This showed me that the internal state of the service was indeed refreshed, but it did not seem so when calling the service from outside. What?

At some point it occurred to me that maybe my singleton was not really a singleton, and that maybe I was refreshing something other than the object that the rest of the software was looking at. And indeed a simple test showed that this was the case.

The Spring ApplicationContext was being refreshed in the bootup process, but unfortunately this was after non-Spring legacy code had acquired a reference to the original “singleton” service. Obviously the context refresh could not do anything to the references from outside the Spring application context, so we ended up with two copies.

Fortunately this one was easy to fix. I also added to the application context a simple bean that will throw an exception if it is initialized twice. This causes context reloads to fail with the exception, which might be a good thing as long as we have non-Spring code that will hold on to stale instances of Spring beans on refresh.
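The guard bean itself can be tiny. Here is a minimal sketch, written independently of any Spring interface (the class name is my own, and wiring it up, for example as an init-method, is left to the context configuration):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal "fail fast on double initialization" guard: registered in the
// application context so that an unexpected context refresh throws instead
// of silently leaving stale references to old bean instances behind.
class SingleLoadGuard {

    private static final AtomicBoolean loaded = new AtomicBoolean(false);

    // intended to be invoked once per JVM, e.g. as the bean's init-method
    public void init() {
        if (!loaded.compareAndSet(false, true)) {
            throw new IllegalStateException("application context initialized twice");
        }
    }
}
```

The static flag is deliberate: the point is to detect a second initialization anywhere in the JVM, not just on the same instance.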

I am starting to write a weblog, or more accurately writing the first post that will hopefully have something that follows it, because writing about what I do and think helps me a lot in reflecting on it in a productive way. This must happen a lot with us literal types who struggle to say a few words, let alone intelligible ones, but will joyfully whack away a couple of several-hundred-word emails on whatever triviality makes us twinkle.

Writing is also the obvious form for a programmer to express what she does and thinks. A lot of what is traditionally seen as programming is effectively thinking by writing, especially nowadays with the exploratory, mock-powered test-driven development that allows one to refine her ideas on the module at hand by writing code. At the end of the day, writing code is a big part of what they expect from us. And because it’s important to think about what you write, it’s great when you can write in an exploratory way, thinking by writing. Perhaps word processors and digital cameras have steered natural-language writing and photography in directions similar to those in which modern software development tools have steered programming.

In reality, the act of writing the source code has never been the only necessary part of producing good software. The focus on people does it for me big time in the agile school of thought, although accelerating the feedback cycle on all the different levels of action is at least as important. I want to excel as a programmer, an emphasis that some much more highly esteemed software professionals underline, steering away from management and office applications towards coding and development tools. But as I believe that face-to-face communication is the best way to educate programmers about requirements, I strive to keep close to the stakeholders even as a mere “coder”.

I intend to use this blog as a vessel for reflection on programming as a real-world activity.