Agile Insider reality bytes…


Dusting off Rework

My current contract ends in a few days, so I'm taking the opportunity to dust off my worn copy of Rework by 37signals.  I also owe a long-overdue thanks to Craig Davidson, an outstanding agile developer I encountered in a previous engagement.

It's not a traditional agile book by any means, but the ideas presented in it resonate strongly with my agile values, and I find it has helped me immensely to keep grounding myself between contracts.  I am now constantly surprised by just how many paper-cuts I have personally accepted at each engagement, and equally surprised at my own level of intolerance now.  I'm actually thinking of requesting a discount from the authors, since I now give this book as a gift almost routinely...

I defy anyone not to find the book invaluable in challenging their current view of the world.

So, once more, and I must apologise profusely for the tardiness, thank you so much Craig...


My Stories Are Bigger Than Your Story

Big Stories...

I've always found it a challenge when new teams adopt scrum but have simply renamed their list of requirements as a product backlog.  Scrum provides a nice facade showing steady progress as the team churns through these requirements, but it makes it extremely difficult to measure tangible business value.  This is particularly the case with a scrumbut model, especially when "done" doesn't include production/release.

The progression from a list of requirements to user stories with acceptance criteria is usually the first step I recommend, but this is fraught with danger.  Requirements typically have inherent dependencies that don't translate well into stories, and requirements are usually solutions to problems rather than statements of the problems themselves.  It is only by uncovering the underlying problems that we can start forgetting about the "requirements" and start providing tangible business value by solving business problems.

The first stab at cutting user stories usually results in very large stories with vague, subjective acceptance criteria, if indeed there are any acceptance criteria at all.  As teams work on these large stories, the tasks they produce are probably also too big and vague, and simply sit forever in the in-progress column, usually as a result of blindly trusting and following the scrum process.  At this stage I usually challenge teams to stop tracking tasks and instead focus on delivering the stories.  This is extremely uncomfortable at first, since teams will be struggling to deliver big stories.  However, it only takes a sprint or two before the team starts focussing on the stories and feels relieved that they no longer get bogged down in 3- or 4-hour planning meetings producing extremely detailed task breakdowns.  The tasks are still extremely important to capture and update, but this becomes more of a real-time activity, and no-one gets penalised for adding new tasks, or removing tasks that are no longer needed...

This hybrid scrum/lean model provides a much greater opportunity to start introducing new technical practices (e.g. test first, automated acceptance testing, etc.) since stories are broken down at the last responsible moment and some stories are natural candidates (comfortable technologies, clearly defined, etc.) for trying new things.

The next challenge I usually face is getting stories to a size that is acceptable to the team and the PO.  Applying the INVEST model works quite well here, as does parking open questions by agreeing to raise future stories as required, limiting the scope of the story in question to something that is estimable.  At this point stories can become quite small (which is great, IMHO), with perhaps only 1 or 2 acceptance criteria.  This for me is the sweet spot.  It means the story will probably fit in the 2-hour to 2-day window, it is well understood, easy to estimate, easy to test, and a host of other great things...  However, it will also probably invalidate any existing (but also highly dubious) velocity metrics, since the team will need to rebaseline...
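To make that sweet spot concrete, here's a minimal sketch (the VAT story and all names are hypothetical, not from any real engagement) of a story small enough that its two acceptance criteria each become a single executable check:

```python
# Hypothetical story: "As a customer, I see VAT itemised on my invoice."
# Two acceptance criteria, each small enough to be one assertion.

def invoice_lines(net: float, vat_rate: float = 0.20) -> dict:
    """Toy stand-in for the real billing code (assumed for illustration)."""
    vat = round(net * vat_rate, 2)
    return {"net": net, "vat": vat, "gross": round(net + vat, 2)}

# Acceptance criterion 1: VAT appears as its own line item.
assert invoice_lines(100.00)["vat"] == 20.00

# Acceptance criterion 2: the gross total is net plus VAT.
lines = invoice_lines(100.00)
assert lines["gross"] == lines["net"] + lines["vat"]
```

A story this size is easy to estimate and easy to test; the whole specification fits in a handful of assertions.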



I've witnessed scrum applied extremely well when supported with some additional technical practices and good story discovery, acceptance criteria and backlog management.  More often than not in my experience, though, scrum is applied as smoke and mirrors over the requirements list to demonstrate progress, and it's only when you hit the last sprint that you realise you can't integrate/release/deliver, despite having completed 90% of the requirements before entering the sprint with only a couple of requirements to go (e.g. Oracle Auditing and Secure Messaging)...


“Natural Language” Automated Acceptance Testing

Do you speak FIT?

I read with extreme interest James Shore's blog about FIT, but was dismayed that he devalues automated acceptance testing.  To claim that FIT is a "natural language" is wrong; it is a developer language, and this is possibly why customers don't get involved.  Concordion, on the other hand, is natural language, and I think it plays much better in this arena.  In addition, it is much more developer friendly.

I've written previously that for me the value of test first is the thought processes surrounding it; however, where applicable, converting these into automated tests, and in particular automated acceptance tests, is a huge win.  I would love to have a customer "own" the tests, but when this isn't possible (almost always) I will try to put my "customer" hat on, think like the customer, and express what I'm about to do in their language (which will be English, not FitNesse, Selenese or RSpec).  If the customer is happy with my specification, I can then use it directly as my test.

So for me, the lack of customer isn't the problem, but I agree with James on one point, there is a problem...

It's the people...  The majority of developers I've encountered can't think like the "customer" and instead thrive on complexity.  They can't express the problem or solution correctly, and write tests that become implementation specific.  This means they have written a test for a specific solution, where actually there could be a multitude of solutions, even in the same technology.  When they then implement this solution and the customer doesn't like it, the test can't be reused and needs to be reworked (I'm avoiding "refactored", since the test was actually wrong, and therefore it should be fixed, not refactored).  This is the problem: the test may be rewritten many times, at which point the customer will be thinking, "this is now the nth time I've asked for this exact same feature, and I've seen five different versions of a test for the same thing, none of which are producing what I'm asking for".  If I were that customer, would I want to own these "tests", which seem to be so difficult to change and such a burden when tweaking the implementation?
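To illustrate the difference, here's a sketch (all names hypothetical, not code from any real project) contrasting a test welded to one particular solution with the same need expressed in the customer's terms:

```python
class Order:
    """Toy domain object standing in for the real one (hypothetical)."""
    def __init__(self, list_price: float, repeat_customer: bool):
        self._list_price = list_price
        self._repeat_customer = repeat_customer

    def total(self) -> float:
        # One possible implementation of the discount; many others would do.
        return self._list_price * (0.9 if self._repeat_customer else 1.0)

def test_discount_implementation_specific():
    # Reaches into internals, so it must be rewritten whenever the
    # solution changes, even though the business need hasn't.
    order = Order(100.0, repeat_customer=True)
    assert order._list_price == 100.0
    assert order._repeat_customer is True

def test_discount_in_customers_language():
    # "A repeat customer pays 10% less than the list price."
    # Any correct implementation passes; no rework when the solution changes.
    order = Order(100.0, repeat_customer=True)
    assert order.total() == 90.0
```

The second test is the one a customer could plausibly own: it reads as their sentence, and it survives a complete rewrite of the implementation.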

So for me, if I don't know what I'm doing, I won't do it, and will instead ask for help from someone who does know what they're doing.  I would encourage all developers to have the courage to admit when they are out of their depth with a practice and seek advice, rather than struggle on developing the wrong thing, which ultimately ends up having little value.

I forever find myself coming back to the five values, and when I measure FIT against simplicity, communication and feedback it would come in at "Good, could do better"...


Limitations of “Grow Your Own” Agile

"Grow Your Own"

Over the course of my career I have worked at several organisations and have always tried to improve their internal processes using agile techniques and principles. Despite being (I hope) a valued employee at each of the companies I have worked at, my success in agile adoption always hit some internal limit. It was only when I joined Emergn that I was able to rationalise this.

It is inevitable that as an employee of a company you will have something to do as part of your day job. This will always be your primary concern, and there will inevitably be certain processes you must follow in order to perform your function. Changing a process from the inside will usually involve challenging it (rocking the boat) using rational arguments and demonstrable alternatives. This is certainly achievable, but it takes rather a long time to introduce even simple improvements. Organisations, particularly large ones, are not content with local optimisation and nearly always want any benefit from a single improvement to become the standard for the organisation as a whole. This usually means that the number of interested parties is artificially (and politically) quite significant, and therefore the amount of resistance to change is high.

As an external coach the mandate is entirely different.

First and foremost, your primary function is to instigate change.

This will mean the amount of resistance is significantly less.

Secondly, you will not be tied to existing processes.

This means you can implement changes and improvements much faster.

Thirdly, as an outsider you are automatically assumed to be an expert.

This will mean that you will not need to engage in the same level of rational argument or discussion as an internal employee.

Lastly, as an outsider you bring some diversity and objectivity to the environment.

You will not be unconsciously constrained by any existing processes or internal preconceptions about the art of the possible.

As an external coach now, I am actually extremely surprised at just how much compromise I had been willing to unconsciously accept as an employee. Every small improvement I wanted to make became a battle, and unfortunately I lost many of these battles not through a lack of rational argument but through a lack of energy or time to keep fighting. When push came to shove I had to get on with my day job and ensure I lived to fight another day. Reflecting now, I'm not surprised that the more successful an improvement was, the bigger the political entourage became and the more difficult the next improvement became. Battles had to be chosen carefully, not necessarily for their potential benefit but often based on the people who had expressed an interest.

I'm aware (and quite proud) of the changes I've made in each of the organisations I've worked at, but am left reflecting on whether the effort was worth it. The barriers to continual improvement were probably a major factor whenever I decided whether to remain at a given company, and I can now see that effecting change from the inside is simply not effective: it will take at least twice as long to be at most half as effective as an external coach.


Functional Debt

Thanks to Ward Cunningham, we now have a wonderful metaphor "Technical Debt" which explains the common problem of skipping a little bit of design or missing out that little bit of refactoring to meet a deadline.  Whenever we cut corners there is a very good chance we are taking on more and more Technical Debt.

Money to Burn? Invest in Functional Debt

But is there a flip side to this?  I think there is, and the term I would use is Functional Debt.  This sits firmly in the YAGNI camp and relates to functionality that is developed without a need (or, worse still, without a test).  Applying too much design, or developing generic frameworks with no business reason to do so, inevitably leads to a solution that is over-engineered.  Of course, over-engineering as a term has been around for a long time, but I prefer Functional Debt because it ties the problem back to money, in a similar way to Technical Debt.
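Here is Functional Debt in miniature, as a sketch (all names hypothetical): a speculative "generic" exporter built when the only business need was a CSV export.

```python
# Functional Debt: a generic framework nobody asked for. The plugin
# registry and hooks are unused functionality that must now be tested,
# maintained and understood, yet solve no business problem.
class ExportFramework:
    def __init__(self):
        self._formats = {}
        self._pre_hooks = []

    def register_format(self, name, writer):
        self._formats[name] = writer

    def add_pre_hook(self, hook):
        self._pre_hooks.append(hook)

    def export(self, rows, fmt="csv"):
        for hook in self._pre_hooks:
            rows = hook(rows)
        return self._formats[fmt](rows)

# What the story actually needed: one function, no debt.
def export_csv(rows):
    return "\n".join(",".join(str(value) for value in row) for row in rows)
```

Every line of the framework carries a cost with no attached business value; the single function delivers the same outcome.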

Debt is a term that evokes emotion and is easy for people to identify with, and it is this capacity of the term that clarifies the issue with the practice.  Over-engineering doesn't evoke the same response, and certainly doesn't suggest a loss of money in the way that debt does.

There are, of course, direct, easily measurable costs involved in creating unused functionality, namely the development costs; however, there are many more subtle costs that are easy to overlook.  There is the missed opportunity cost of not doing the right thing.  There is the project overhead cost of maintaining code that is not used.  There is the project overhead cost of increased complexity and time for the standard day-to-day activities of testing and refactoring.  There is the increased maintenance cost, since the code is now harder for support personnel to understand...

One of the biggest causes of Functional Debt I have seen is a lack of customer (business) involvement or direction.  Left to their own devices, IS departments naturally build overly complex solutions to simple problems.  Without a business value attached to a piece of functionality (actually, to the problem solved by that piece of functionality), it is only too easy for the IS department to burn money like there's no tomorrow.