I was a fan of the stories of Thor Heyerdahl: I read Kon-Tiki, Amon-Ra and also a book on the mystery of the huge statues on Easter Island. The scientific-adventurous style of the books is great. But what I will write about now is a little curiosity from the Ra story that stuck in my mind.
Ra was the papyrus reed boat that served in one of Heyerdahl's famous feats of experimental archaeology. He wanted to test the hypothesis that ancient peoples had vessels capable of crossing the oceans. Earlier, with the balsa raft Kon-Tiki, he had shown that ancient South Americans could have travelled all the way to Easter Island. With Ra he intended to show that the old Egyptian reed boat design was capable of crossing the Atlantic Ocean.
Heyerdahl had the reed boat built following the designs in old Egyptian paintings. Many of the paintings showed a rope between the bottom and the upward-tilted back tip of the boat. Heyerdahl thought that the function of the rope was to force the back end of the boat to curve upwards, but when he observed that the boat kept its shape even without the rope, he decided to leave it out. This proved to be a bad idea:
A decision to cut what proved to be a crucial line between the prow and the deck caused the stern to sink and the ship to increasingly list to one side. After almost two months afloat, and with a hurricane approaching, the Ra was abandoned at sea frustratingly short of its goal, still floating but nearly uninhabitable.
— Donald P. Ryan: “The Ra Expeditions Revisited”
Heyerdahl learned valuable lessons, built a new reed boat, Ra II, and crossed the Atlantic with it. In the realm of commercial software production, the equivalent might be anything from a quick and easy bug fix to a major financial failure, depending on the circumstances.
Obviously you would not have read even this far if you did not find some interest in fabricated parallels between software and the real world, such as this: Heyerdahl's mistake is like leaving programmer testing out of XP. More generally, even good advice can turn against you if you misinterpret it or make a false assumption, and end up not doing quite what the advisor had in mind.
The extreme programming example goes to the heart of what I want to convey. If you take a whole as broad and complex as extreme programming, and do not realise how its separate parts need each other's support, you might inadvertently cut off a crucial line. This might well be provoked by an illusion of resource scarcity or by other kinds of laziness, for which we have some great expressions in the Finnish language: rusinat pullasta (“only the raisins from the sweet bun”) and yli siitä, mistä aita on matalin (“crossing the fence where it's lowest”). We Finns pride ourselves on eating the boring, dry, doughy (domestic) part as well as the sweet (imported) raisins, and on crossing fences at their highest points so as not to look like slackers. The kind of slacking where you don't jump that high when it isn't strictly necessary for getting to the other side is nowadays fashionably called effectiveness, and I think it is exactly in the pursuit of this kind of effectiveness that you might go astray.
Already in the first edition of the “white” extreme programming book, Kent Beck underlined the fact that the different legs of XP support each other and that the process would fall down like a house of cards if you left one wall out. (Not having yet gotten around to reading the second edition, I should mention that it is most probably well worth a read.) Obviously he did not describe his process as a house of cards, but that is the image that stuck with me. Later on, I think Beck pretty much conceded that of the XP practices, at least test-driven development could stand alone: even if it might not save your project, you may be able to do good work practicing TDD in a project that does not embrace the other XP practices. Though if refactoring were restricted for political reasons, doing TDD might become very difficult.
Of the other XP practices, I think that perhaps pair programming might fly alone as well. Without proper first-hand experience, my guess is not an educated one. But something I have been through is what happens when you take the XP path of leaving out big design up front, or indeed almost any design up front (which is already getting too extreme to be proper XP), but fail to implement automated testing, merciless refactoring and pair programming. If this doesn't end in a mess for you, you had better go play the lottery. It is exactly this kind of faulty shortcut that gives “XP” its bad reputation as glorified hacking.
Another, more subtle example of how the XP practices support each other is the crucial but easily overlooked third step of test-driven development: red – green – refactor. If you leave the first kludge that seems to work in the repository, you're not actually doing TDD but causing trouble:
Once the tests pass the next step is to start over (you may first need to refactor any duplication out of your design as needed, turning TFD into TDD).
To keep your TDD boat afloat, you need to refactor both the production code and the test code continuously. It is not enough to always write a test before writing code. Another necessity is that all tests are run often and kept green, as maintaining an exhaustive suite of tests is part of TDD.
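To make that third step concrete, here is a minimal sketch of the cycle using Python's unittest; the leap-year example and all the names in it are mine, invented purely for illustration, not taken from any of the books quoted here. The idea it tries to show is that the tests are written and fail first (red), the simplest thing that works makes them pass (green), and the refactoring step then cleans up both the production and the test code before anything is left in the repository.

# A hypothetical red-green-refactor walkthrough, condensed into its end state.
import unittest


def leap_year(year: int) -> bool:
    """Production code as it looks after the refactor step: the first
    'kludge that seemed to work' (a hard-coded lookup table) is gone."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    # Red: these tests were written before leap_year() existed and failed first.
    def test_ordinary_leap_years(self):
        self.assertTrue(leap_year(1996))
        self.assertTrue(leap_year(2024))

    # Green: the century cases forced the rule to be generalised, and the
    # refactor step removed the duplication that crept in along the way.
    def test_century_years(self):
        self.assertFalse(leap_year(1900))
        self.assertTrue(leap_year(2000))

    def test_common_years(self):
        self.assertFalse(leap_year(2023))


if __name__ == "__main__":
    unittest.main()  # run the whole suite often and keep it green

The point is not the leap-year rule itself, but that the refactor step is where the kludge and the duplication get removed while the tests stay green.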
Yet another example from the XP / agile realm, but from a slightly higher vantage point, is overconfidence in developer tests as a measure of progress. For me, so far, customer-defined automated tests, or indeed any predefined customer tests, have been but a dream. Yet to measure whether the software is close to complete, it is exactly the customer / user tests that matter. Even if you have good code coverage from programmer tests and think you have implemented a good part of the required features, you must fight to get feedback from the customer, or from whoever in your environment is closest to the real user of the system.
What I still saw, even with TDD, are misunderstandings between the project’s customers (also known as business experts or product owners) and the software developers. Even if the deployed code was almost bug-free, it didn’t do everything the customer had expected. We can enlarge on the concept of TDD and reduce the risk of delivering the ‘wrong’ code with what I call customer test-driven development.
— Lisa Crispin: “Using Customer Tests to Drive Development”
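As a rough illustration of what such customer test-driven development can look like, here is a hypothetical sketch, again in plain Python unittest; the discount rule, the numbers and the names are all invented, and in real projects tools such as FitNesse or Cucumber are often used so that the customer can read and edit the examples directly. The essential part is that the expected results are agreed with the customer before implementation, and that these examples, rather than the programmer tests alone, tell you whether the right thing was built.

# Hypothetical customer-facing acceptance tests, phrased in the customer's
# vocabulary (orders, discounts) rather than in terms of classes and methods,
# so the product owner can read them and say "yes, that is what I meant".
import unittest


def discounted_total(order_lines, loyal_customer):
    """Stand-in application code for the system under test."""
    total = sum(price * quantity for price, quantity in order_lines)
    if loyal_customer and total >= 100:
        total *= 0.90  # invented business rule: 10% off orders of 100 or more
    return round(total, 2)


class CustomerDiscountAcceptanceTest(unittest.TestCase):
    """Examples agreed with the customer before implementation started."""

    def test_loyal_customer_gets_ten_percent_off_large_order(self):
        # Given a loyal customer with an order worth 120
        order = [(30.0, 4)]
        # When the total is calculated
        total = discounted_total(order, loyal_customer=True)
        # Then the customer pays 108
        self.assertEqual(total, 108.0)

    def test_small_orders_are_not_discounted(self):
        order = [(30.0, 1)]
        total = discounted_total(order, loyal_customer=True)
        self.assertEqual(total, 30.0)


if __name__ == "__main__":
    unittest.main()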
To make all these long stories one short one: I claim that you must not adopt a process or a method and then leave out a crucial part, because then you might fail horribly. Am I then saying that you should adopt any good-looking process blindly and without criticism?
If you got this far and think so, I hope that you will come up with a general counterexample in the next few seconds 🙂
Well, of course I don’t advocate following processes blindly, because cargo-cult programming can be just as bad a problem as taking faulty shortcuts. In fact I think the problems are related: omitting crucial parts of a method amounts to practicing a cargo cult on the remaining parts, without fully understanding their dependencies and interactions.
So in the end my point is not really about being lazy in adopting practices, but about being lazy in thinking about your work. The steps on the journey of adopting a method or a process effectively resemble the stages of reading texts on a subject as one becomes more familiar with it, moving from restatement through description and interpretation to critical reading. To master a skill, you need the passion, perseverance and ability to learn, so that you will build a couple more reed boats before you get one to cross an ocean with.