Friday 20 July 2012

XP Revisited: Part 2 - Beck's Coming of Age

As mentioned in my last post, this weekend I got hold of, and finished, the second edition of "eXtreme Programming Explained", written by Kent Beck and Cynthia Andres.

For at least the last decade, this has probably been one of the most influential books to be placed in the hands of software developers and engineers. With it and its subject matter having been mainstream for a good while now, and with readers on Amazon stating that this edition of the book is very different to the original (and not always in a good way), I decided to read through the second edition to compare it both to what I remember of the first edition and to how well or badly the software development field has applied these concepts in practice.

I read the first edition in 2002 but was totally unimpressed. The book was far too software-developer focussed and called for practices which claimed to deliver software faster and at less cost, which I felt would not automatically be the case, as they would introduce a significant amount of rework (aka refactoring) at each stage. I thought this would do nothing to shorten a 'complete' release of software. If you compare one 'big bang' release of software with all the smaller releases of XP, or agile methods in general, the amount of development needed to reach a finished product is, of course, about the same. The benefits lie elsewhere, in areas such as risk reduction and incremental value creation.

I am a big believer in software engineering, not 'craftsmanship', and at the time was a heavy user of formal methods and languages such as UML, OCL, Z/VDM and RUP. (Indeed, UML with RUP is still my preferred method of development, but with short iterative cycles, placing collections of use cases, analogous to stories and features, in the hands of the business users at each release.) Seeing as RUP is in itself an iterative method, I didn't think there was anything strange or unusual about XP doing this too.

Additionally, I had been using self-testing classes for a good few years by the point at which I was introduced to the DUnit testing framework for Delphi in late 2001. Self-testing classes, with test methods segregated from the main bulk of the code in the same class by the use of compiler directives, allowed the software to test itself and, obviously, gave the class access to its own private and protected methods from within 'Foo.Test()'.

Don't get me wrong, there are a few drawbacks with this (such as needing to remember to switch the build configuration over or disable the compiler directive, or, more seriously, the need to create new tests each time you refactor if you wish to keep the advantage of private method testing), but the ability to test private/protected methods allowed much finer-grained debugging than can be done when only testing the public methods of a class.
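For anyone who hasn't seen the pattern, a minimal sketch of the idea in Delphi-style Object Pascal follows. The unit, the TFoo class, its methods and the TESTING symbol are all illustrative inventions here, not lifted from any real codebase:

    unit FooUnit;

    interface

    type
      TFoo = class
      private
        function DoubleIt(AValue: Integer): Integer;  // private helper, invisible outside the class
      public
        function Calculate(AValue: Integer): Integer;
        {$IFDEF TESTING}
        procedure Test;  // self-test method, compiled out of release builds
        {$ENDIF}
      end;

    implementation

    function TFoo.DoubleIt(AValue: Integer): Integer;
    begin
      Result := AValue * 2;
    end;

    function TFoo.Calculate(AValue: Integer): Integer;
    begin
      Result := DoubleIt(AValue) + 1;
    end;

    {$IFDEF TESTING}
    procedure TFoo.Test;
    begin
      // Being a member of TFoo, Test can exercise the private method directly.
      Assert(DoubleIt(2) = 4, 'DoubleIt failed');
      Assert(Calculate(2) = 5, 'Calculate failed');
    end;
    {$ENDIF}

    end.

Build with TESTING defined (e.g. via a conditional define in the project options) and the harness can call Foo.Test(); build without it and the test code vanishes from the release binary entirely.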

I spent a significant amount of time at that point playing e-mail tennis with Ron Jeffries, batting the XP ball around. What surprised me was the effort he put in to critiquing the fairly small-fry elements of other methods. Indeed, sometimes some of his comments seemed a little like critique for critique's sake. I still remember the conversation about sequence diagrams, where we discussed how to check things work according to the activity diagrams before building any code (note, I have come to argue this can be considered a 'test first' activity). He used the statement "How do you tell the difference between a sequence diagram showing a whole system and a black cat at midnight?", which is a fantastic analogy, even though that is not what I was saying at all :-D. I break sequences down into component and package level entities too; as the software is assumed to be loosely coupled, most scenarios which result in a high linkage between two packages can be seen not to be cohesive, and you can tell the segmentation into those specific packages is wrong. So there are as many ways to figure things out from models and diagrams as programmers can see in code.


I attempted to discuss the pros and cons with many different people over the years but found no reason to say that XP, or agile methods in general, were any better than the RUP process I was using. The lack of strong, cohesive, non-contradictory reasoning from anyone I discussed XP with over the years (reasoning that didn't simply appeal to the programmer) didn't help, and indeed, for the first few years of the 'agile revolution' I could easily argue that a company's investment in, say, RUP or later Scrum would be a much better fit with existing corporate structure, at least initially. After all, the vast majority of companies are not developer led. The aim of the majority of companies is not to develop software. So unless a company was willing to structure itself to segregate the development of software into a different organisational unit, including the use of transfer pricing for charging models (i.e. inter-departmental), then unmodified XP was a loser methodology in terms of adoption. This was not helped by Beck's insistence in the first edition on the source code being the final arbiter, which I felt was both narrow and small-minded, as were a lot of the vehement statements in that edition.


In the intervening years, the way agile developers often quoted references to other fields, without any analytical or empirical evidence to support their claims, was startling (indeed, even the conjectures were very badly thought out, and this is still the case today). They claimed to be lean, but didn't understand it. They claimed to revel in the importance of Kanban, but again didn't understand it (ask pretty much any developer what the equation is for, say, a container/column size and the response you often get is "There's an equation?" *facepalm*). They religiously quoted continuous improvement but didn't measure anything (sometimes not even the points flow!?!), so had no idea whether what they were doing was working or what caused it. Woe betide anyone who mentioned alternative confounding paths or variables (most developers won't have a clue about factor analysis).
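For the record, the classic Toyota formula for the number of kanbans in circulation (and hence, indirectly, for a container or column limit) is usually given along these lines, where the symbols are the standard textbook ones rather than anything from Beck's book:

    N = (D x T x (1 + S)) / C

Here N is the number of kanbans, D the demand rate, T the replenishment lead time, S a safety factor and C the container capacity. The exact form varies by textbook, but the point stands: the limits are calculated from measured demand and lead time, not plucked out of the air.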


So, all in, given the years of garbage I had been quoted by some development teams, I was fully up for a fight when reading the second edition. I got my pen and a notebook and noted down the reasoning for each of the salient points, including the values and principles (which I didn't have too many significant issues with the first time) and the new split of 13 primary and 11 corollary practices, plus any comments he made that I didn't immediately agree with, to see how he addressed the reasons for them as the book progressed.


Surprise!



What I ended up reading was a complete shock! The only way I can describe Beck's take on software development in the second edition is mature. Very mature! Having read the first edition, I started out wanting to tear this work to pieces, but actually, with about a third of the book rewritten, his slight U-turns on some of the things he presented in the first edition and his admission that he wrote that first edition with the narrow focus of a software developer increased his stature substantially in my eyes.

So that leaves me to point the finger of software engineering mediocrity in this day and age firmly at the software developers themselves (indeed, Beck himself has criticised some of the claims by modern agile developers). If you read the second edition you will see what I mean. I shall cover some of the more salient points here over the next few blog posts, but I just wanted to say that if you have adopted XP from the first edition, then read the second edition! There is a whole world view you are missing out on.
