Sunday 16 October 2011

Mistaken Myths: Number 2. You can document in agile methods once the software is done

This is a classic and a fairly easy one to dismiss.


In my younger IT days, late 2001 to early 2002, when agile methods started to come to the fore in the UK job market, there was quite a fanfare from the people who used them. Talking specifically about XP, I remember e-mail discussions with one of its founders, Ron Jeffries. The XP proponents around me in the company I was contracted to claimed that you never wrote documentation. Coming from a background where documentation was a key deliverable for transition at many stages of any 'heavyweight' process (RUP/Waterfall), I could not see how adequate communication from one stage to the next could take place without everyone being on the same page.


I didn't know a great deal about agile methods at the time, so I ended up getting in contact with the XP brigade, and it was Ron who replied. I put my concerns to him, and his response was that it wasn't that you didn't write any technical documents, just that you wrote them at the end of the delivery of the story or release (sprints and releases, as they would be these days). That way you know what you have delivered and the document is as up-to-date as it can be. I will paraphrase what I remember he said, with apologies to Ron if I have incorrectly recalled the events of 10 years ago.


He cited reasons that included:


  1. Nobody reads documents. Clients don't read them and developers certainly don't read them.

  2. Nobody updates documents. They have a nasty habit of being written once, something changes and the document becomes obsolete.

  3. During a development phase, requirements are not immutable, so changes to the model may need to be made on the fly. If you have to change a diagram in a static document before you start to write your code, then, given that no-one will read it, you are wasting time producing a deliverable that is redundant from the perspective of delivering code.

  4. The tests should be your documentation.

  5. From the code, it should be self-evident what the code is meant to do, as that is the final arbiter. In other words, the code should be your documentation.
Before I comment on this, I have to say that it was a decade ago. Ron is a capable professional and, whilst our e-mail exchanges back then saw us pretty much at philosophical loggerheads, there are elements of his philosophy, and of agile methods in general, that have brought nothing but good to the industry in that time.

However, as with all agile processes, it is the people who make it happen. If you can't get the people, the company culture is against it and the business doesn't have buy-in, then it is destined to fail and you will come up against an immutable wall of a different kind.

In the UK, many organisations fail to implement agile methods properly and pay the Cobb's Paradox price as a result. I will hopefully come to what I see as some of the reasons for that in a later blog, but to concentrate on the above, we just need to note that, certainly in UK IT, people HATE documenting with a passion. Most also hate writing unit tests first or carrying out spikes, but that is another story. Documentation is regarded as overly bureaucratic. The majority of industry developers want to hack around with deliverable code in a wholly amateurish way, and quite a lot of organisations out there simply let them do it. So when someone comes along and says "You don't have to write anything until the end, but it will improve code quality", they are predisposed to read this as "You don't have to write up anything and you become better", and any method advocating this gains a lot of traction very quickly.


In the early days of agile adoption, a customer representative had to be available on site to sit with you whilst you went through the story and coded the corresponding result. In the UK, given the amount of free time nobody has, this was always going to be an impossibility for most organisations. So you would often get an electronic game of paper tennis, with e-mails going back and forth to customers who didn't have time to come into the software house to deal with things.


This would often happen with one developer as the contact who, like most developers of the day, kept themselves and their e-mail in a silo, so the conversations never came to the fore. The decisions should have been picked up by their pair programming buddy, but if that person didn't know the conversation had taken place (having been on leave for a couple of days while the first programmer paired with someone else), it wouldn't cross their mind. The programmer who received the decision should have put it somewhere traceable and written it up, but they could cite "You don't have to write any documentation". Some time later they would be complaining to the IT manager in a panic because the e-mail sweeper had auto-deleted the message, hours before the big boss demanded in an enraged voice, "Who told you to do that?"


In any case, it was never brought to review, so there was no way of knowing how well anything had been done, due to the lack of traceability. At the time, sprint backlogs and Kanban didn't have the prominence they have now.


A much bigger issue that is still around today is this: what happens when all this tribal memory, and the wiped-down-whiteboard work floating around in people's heads, decides to leave before it is written up? A previous client once had a number of cynical developers leave in a very short space of time for disparate organisations that offered better pay and no unit testing or development rigour. Their departure caused a melee and manifest chaos, as people had to pick up code they knew nothing about and run with it. The handovers were not sufficient, process and policy documents were not readily available, there were certainly no specifications, the unit tests had little coverage (so there was no confidence in them), the code didn't have comments (which was one of the agile 'enforcements' that those particular developers loved) and there were absolutely no traces of documents or e-mail trails, as the e-mails were lost when the accounts were eventually purged.


This is easily my biggest gripe about agile philosophies that advocate no up-front documents. Everything is stored in tribal memory, and that information gets disseminated through the dynamics of organisational culture. Everything has to be in place for that dissemination to happen, and it is a very delicate thing: even the placement of walls in a building can stop the inter-cultural flow of memes.


To characterise this: one team learns something new and places it on their project portal (probably via an electronic document ;-), but with no link to that portal from any other SharePoint site, say, or from the development wiki, nobody ever sees it. As a result, time passes and other developers spend their time re-exploring that wheel, not knowing that others in the company have done the work already, wasting yet more time on wheel-inventing. This can happen four or five times, and if key people leave in the meantime, you not only lose the information about the role, you may also lose the fact that you ever learned the lesson. So the organisation is not learning. To draw an analogy with the human brain, it is like losing a brain cell just before it has even started to make connections to others.


Personally, I think there are some very easy solutions to the problems highlighted by Ron above. However, they are seemingly incongruent with the views of some misguided agile proponents. Ironically, a lot of these ideas were conceptually presented as part of the CASE and MDD/MDA schools of thought decades ago, but a certain 'biggest name' industry leader has only just caught on ;-)


So some of my solutions to the above are:


  1. Keep the documents concise, but get them done! - If you have to present requirements to clarify the project backlog elements, then do so by presenting a list of them with a click-through to a full page detailing the deliverable, citing the success criteria, the stakeholder, the description and so on. This can take the form of a SharePoint list linked to the actual parent work item in TFS. Customise if you have to. You would not believe the amount of time this will save the organisation!

  2. In Windows, use OLE to link to Visio documents - At the moment, standard VS2010 class diagrams cannot adequately be OLE-linked into Word documents, but Visio will do the job nicely. I have created a screencast and posted it below to show how this can be done. When used to link documents, this gets around the problem of the document becoming obsolete when only the model is kept up-to-date.






  3. Take a long hard look at customer approval processes - The excuse that nobody reads the documents is largely true, but it won't be solved by, say, simply putting the customer in the room; that will meet resistance, and to some degree I agree with those resisting. I will come to the financial reasons for this in another blog, as that is a doozy!

  4. Make sure your tests are complete and consistent! - In agile methods with no tested designs (usually presented in documents), more than in any other class of methodology, the tests become the biggest single point of failure in the SDLC. If your tests are incomplete, inconsistent or just plain wrong, then you are risking the entire project, because you cannot validate the design and code at any point. You must know how many tests you will need! QA members should come down hard on developers who miss this. We have failed to self-regulate in the development world, so it is time to play the 'self-regulate or legislate' card.

  5. If nobody reads documents, information will be lost - There are two parts to communication in general: someone transmits a message and someone receives it. Without both of those actors in the system, no communication has taken place. Ooh! I hear a tree falling in a wood!... The communication is almost never a 1960s Chomskian ideal, but at the same time, the reduction in ambiguity and the decision traceability that up-to-date documents give cannot be substituted by a project manager having to install a licensed copy of VS2010, pull the code and run a badly ordered set of tests, with no comments, just to see what the system does. TFS burn-down charts, test profilers and build-failure WebParts are a useful statistical toolset, don't get me wrong, but they compare the code against, say, the tests, the test coverage and developer work rates; you can't guarantee the tests are correct with respect to the specifications. Additionally, the information about why the tests came about (and the business criteria they are testing) can be lost if a badly written work item is put in initially. TFS gives you the tools to do it properly, but unfortunately, almost no organisation that I have worked in recently does.

  6. Relate meaningful method names, properties, constants, fields and variables to the business's Domain Specific Language - This should be standard practice (there is a rough sketch of what it looks like in code just after this list). It is not enough to use some of these domain-specific names without a glossary or definition reference (which is a document). The business knows what these terms mean. As developers, we almost never know what the business means. So it is important to use the DSL of the business, and this involves learning what those meanings are. Then, once members of the organisation learn something and document it, the rest of the organisation can pick it up and run with it. It is part of a microclimate of internet blogging and group consultation (in the form of dojos and group katas). For blogging, SharePoint gives you that ability, so use it.
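
To give a flavour of point 6, here is a minimal, hypothetical sketch. The domain terms (policy, surrender value, lapse date) and the GL-017 glossary reference are invented purely for illustration; in a real project they would come from the business's own glossary document.

using System;

// A minimal, hypothetical sketch - the domain terms and the GL-017 glossary
// reference are made up for this example.
public class PolicyServices
{
    // Before: the intent is opaque to the business and to the next developer.
    public decimal Calc( int id, DateTime d )
    {
        return 0m; // ...
    }

    // After: the names map onto the business's own Domain Specific Language
    // and the glossary reference keeps the meaning traceable.
    /// <summary>
    /// Calculates the surrender value of the given policy as at the lapse
    /// date. See glossary entry GL-017 ("Surrender Value") for the business
    /// definition.
    /// </summary>
    public decimal CalculateSurrenderValue( int policyNumber, DateTime lapseDate )
    {
        return 0m; // ...
    }
}

The second version reads like the requirement it implements, and the glossary entry gives the next developer somewhere to go for the business meaning.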


Given all this, and more, it cannot be overstated that iteration zero is very important! Those foundations will set you up for the entire project's success or failure, so GET THEM RIGHT!!


The underlying theme in a lot of this is that all that has happened in the decade or so since XP took prominence is that our definition of a document has changed. We have not eradicated them (or even the paper that comes with them, donated by those falling trees), far from it. Correctly applied documents have embedded themselves in the best development cultures; they have just stopped being called specifications.


The thing is, none of this is new. These are lessons that were learned 20 or 30 years ago, but this time the people carry a greater proportion of the responsibility (and often don't know it). So you have to get the right people.


Even though I prefer heavyweight approaches, I still see the immense value that good, solid QA, automated unit testing, automated builds, continuous integration, knowledge sharing via dojos and so on have as individual elements that can be applied to heavier methods. Indeed, since coming across the automated xUnit philosophy in 2001, I used DUnit in every system that I developed in the interim, before shifting to .NET and picking up the VS suite. This is a must as far as companies are concerned. Good quality unit tests all but guarantee that the testing itself is solid, because it is repeatable and can isolate individual areas of the problem space to look at should something go wrong. But that doesn't detract from the fact that if the tests are wrong, then without traceable documents you have no idea whether it is the test that is wrong, the code that is wrong, whether someone misinterpreted an ambiguous statement from the client, or whether the client didn't understand the question. Documents are the glue between these different flowing elements of the Kanban board, and without them we cannot validate or verify anything.
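
As a rough, hedged sketch of the sort of test I mean (the Account class, the overdraft rule and the RQ/AC02/04 reference are invented for this example, using NUnit-style attributes), a test that both documents the behaviour and traces back to its requirement might look like this:

using NUnit.Framework;

// A minimal, hypothetical sketch: the Account class, the overdraft rule and
// the requirement reference RQ/AC02/04 are made up for illustration.
[TestFixture]
public class AccountTests
{
    // ref: RQ/AC02/04 - withdrawals must never take the balance below zero.
    // The reference points back at the work item that holds the decision.
    [Test]
    public void Withdraw_MoreThanBalance_IsRejectedAndBalanceUnchanged()
    {
        var account = new Account( 50m );

        bool accepted = account.Withdraw( 100m );

        Assert.IsFalse( accepted );
        Assert.AreEqual( 50m, account.Balance );
    }
}

// The production class under test, kept trivial for the sketch.
public class Account
{
    public decimal Balance { get; private set; }

    public Account( decimal openingBalance ) { Balance = openingBalance; }

    public bool Withdraw( decimal amount )
    {
        if ( amount > Balance ) return false;
        Balance -= amount;
        return true;
    }
}

The test name says what the behaviour should be, the reference says where the decision came from, and the test itself is repeatable; but if RQ/AC02/04 was badly written up in the first place, neither the test nor the code can tell you that.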

Wednesday 12 October 2011

Mistaken Myths: Number 1. Your code should be self-documenting

This is a fairly classic misunderstanding, yet one that is spouted a lot in the industry.

It seems pervasive that developers should produce self-describing/self-documenting code. To a degree I agree with this, but my viewpoint is that comments in code should explain WHY something is being done, not how. They should be more of a specification of the method than a description of how it does what it does, which tells you little without an understanding of the start and end goals (the success criteria, if you want to put it in TDD terms). This is a view shared by a few members of the industry, such as those following in the footsteps of "The Pragmatic Programmer", and not always by those blindly following what I keep getting told "Code Complete" recommends. (Note: I have not read that book myself, so I can't comment on whether it is the book that actually recommends this or the person reading it :-)
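
To give a feel for the distinction, here is a minimal, hypothetical sketch. The Order type, the three-day rule and the WR-104 work request are all made up for the example:

using System;
using System.Collections.Generic;

// A minimal, hypothetical sketch: the Order type, the three-day rule and
// the WR-104 reference are invented for illustration.
public class Order
{
    public DateTime DispatchDate { get; set; }
    public DateTime EstimatedDelivery { get; set; }
}

public class DeliveryEstimator
{
    public void ApplyEstimates( IEnumerable<Order> orders )
    {
        foreach ( var order in orders )
        {
            // A 'how' comment merely restates the code:
            //   "add three days to the dispatch date".
            // A 'why' comment records the decision and where it came from:
            // estimates are padded by three days because the warehouse batches
            // dispatches overnight - agreed with the client in work request
            // WR-104, which also holds the success criteria.
            order.EstimatedDelivery = order.DispatchDate.AddDays( 3 );
        }
    }
}

The first style goes stale the moment the code changes; the second still tells the next developer why three days was the right number and who agreed to it.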

Decision traceability is something that is almost impossible to get from code alone. So some adjunct mechanism is often used, such as referencing JAD/JRP sessions, original work requests/bug reports/version control logs in larger organisations, or simply a log book/notepad in smaller ones. Some developers choose to comment their code, but the comments have to be of good quality. A comment that is not updated carries the same productivity loss as the loss of 'tribal memory' (as Grady Booch often puts it) regarding an application (which I will get to in a later post), so comments should be treated with the same respect as the code itself, especially if you are generating documentation from them.

In other engineering disciplines, most decisions are given a reference code and slapped on design diagrams and documents. This allows a decision to be traced all the way back to the original discussion that led to it being made. In software development, this has finally started to get through to some groups.

Additionally, I prefer to place the functional specification (I write mine in OCL, given I have a bit of VDM in my history :) in the form of pre- and post-conditions at the top of the method/function. It would look something like:


/*
 * --- ref: RQ/SH01/01 ---
 * context AccountingServices::TryConnect( Host : Uri,
 *                                         Port : Integer ) : Boolean
 * ------
 * pre:  registeredAddresses->includes( Host )
 * post: ( result and AccountingServices.Host.Connected ) or
 *       not( result or AccountingServices.Host.Connected )
 */
/// <summary>
/// This method attempts to contact the host server and establishes a
/// connection if the address is one of the registered addresses.
/// </summary>
/// <example>
/// ...
/// if ( AccountingServices.TryConnect( hostAddress, portNumber ) )
///     ... Do Something ...
/// else
///     ... Do Something else ...
/// ...
/// </example>
/// <param name="HostAddress">The host location to attempt the
/// connection to</param>
/// <param name="port">The port number to connect to</param>
public bool TryConnect( Uri HostAddress, int port ){ ... }



Or you could just apply the reference and hope that a developer will read the documentation...

...ooh look over there! A rainbow, I need to catch it!! :o)

But before I go colour hunting, good quality comments are a good thing. Donald Knuth's literate programming, despite his best efforts between the 1970s and 1990s, has in its entirety been consigned to faded memory in modern-day imperative paradigms, though the principles it pushed live on in the new guise of documentation comments (such as those for JavaDoc, DelphiDoc, DOxygen and SandCastle).

The problem is using these tools under time pressure. Documenting code is often relegated to third-class status, way behind getting code out of the door and unit testing. It is performed with the mentality of 'I will do it tomorrow'.

Sometimes it is up to the QA members of the team to demand that this be done, especially when no formal design documents have been created and the whiteboard has been wiped clean...

...Oooooh!! It's getting away...

...Otherwise it is lost when that 'tribal memory' fades or joins another 'tribe' :o) The best efforts of the developers in writing unit tests and using good method and variable names will never explain what decision was taken and why.

Some developers and companies regard good documenting comments as 'Gold Plating' and, in doing so, end up paying for the time of a comparatively highly paid contractor/consultant to repeatedly chase up the source of decisions after the decision makers have done an Elvis and left the building. If the 'tribal leader' who made the decision isn't based in the organisation any more, you are pretty much stuffed. So this highly paid consultant will trawl through tens or hundreds of thousands of lines of tests and code, not knowing whether either accurately reflects the business process (and, if they are new, not even knowing what the business process is in any meaningful detail), for days or weeks at a time, making zero progress on development or bug fixes, before (s)he finds the source of the problem and, a ten-minute job later, it's fixed!! Well done, Waste Maker Corp: with your misinterpretation of lean principles, you have just wasted thousands (or tens of thousands) of your own company's money because you didn't let a cheaper developer spend a couple of hours putting documenting comments in.

...crap, the rainbow's gone!! :-(

Not to worry, there will be other rainy days.