My current contract ends in a few days, so I'm taking the opportunity to dust off my worn copy of Rework by 37signals. I also have to make a long overdue thank you to Craig Davidson, an outstanding agile developer I encountered in a previous engagement.
It's not a traditional agile book by any means, but the facts presented within it resonate strongly with my agile values, and I find it has helped me immensely to keep grounding myself between contracts. I'm now constantly surprised by just how many paper-cuts I've personally accepted at each engagement, and equally surprised at my own new level of intolerance. I'm actually thinking of requesting a discount from the authors, since I now give this book as a gift almost routinely...
I challenge anyone not to find the book invaluable in questioning their own current view of the world.
So, once more, and I must apologise profusely for the tardiness, thank you so much Craig...
I've always found it a challenge when new teams are adopting scrum but have simply renamed their list of requirements as a product backlog. Scrum provides a nice facade which shows steady progress churning through these requirements, but it makes it extremely difficult to measure tangible business value. This is particularly the case where we have a scrumbut model, especially when "done" doesn't include production/release.
The progression from a list of requirements to user stories with acceptance criteria is usually the first step I recommend, but this is fraught with danger. Requirements typically have inherent dependencies that don't translate well into stories and requirements are also usually solutions to problems rather than the problems themselves. It is only by uncovering the underlying problems that we can start forgetting about the "requirements" and start providing tangible business value by solving business problems.
The first stab at cutting user stories usually results in very large stories with very vague, subjective acceptance criteria, if indeed there are any acceptance criteria at all. As teams work on these large stories, the tasks they produce are also probably too big and vague, and simply sit forever in the In Progress column. This is usually due to blindly trusting and following the scrum process. At this stage I usually challenge teams to stop tracking tasks and instead focus on delivering the stories. This is extremely uncomfortable at first, since teams will be struggling to deliver big stories. However, it only takes a sprint or two before the team start focussing on the stories and feel relieved that they no longer get bogged down in 3 or 4 hour planning meetings producing extremely detailed task breakdowns. The tasks are still extremely important to capture and update, but this becomes more of a real-time activity, and no-one gets penalised for adding new tasks, or removing tasks that are no longer needed...
This hybrid scrum/lean model provides much greater opportunity to start introducing new technical practices (e.g. test first, automated acceptance testing, etc), since stories are broken down at the last responsible moment and some stories are natural candidates (comfortable technologies, clearly defined, etc) for trying new things.
The next challenge I usually face is getting stories to a size that is acceptable to both the team and the PO. Applying the INVEST model works quite well here, as does parking open questions by agreeing to raise future stories as required, limiting the scope of the story in question to something that is estimable. At this point stories can become quite small (which is great IMHO), with perhaps only 1 or 2 acceptance criteria. This, for me, is the sweet spot. It means the story will probably fit in the 2 hour to 2 day window, it is well understood, easy to estimate, easy to test, and a host of other great things... However, it will also probably invalidate any existing (but also highly dubious) velocity metrics, since the team will need to rebaseline...
I've witnessed scrum applied extremely well when supported with some additional technical practices and good story discovery, acceptance criteria and backlog management. More often than not in my experience, though, scrum is applied as smoke and mirrors over the requirements list to demonstrate progress, and it's only when you hit the last sprint that you realise you can't integrate/release/deliver, despite having completed 90% of the requirements before entering that sprint with only a couple of requirements to go (e.g. Oracle Auditing and Secure Messaging)...
This article largely follows on from some previous entries and in particular my entry on user centred test driven development.
It is often a complaint that large organisations trundle along painfully and slowly. Work can't start without following some process or other until you have sign-off. Part of this sign-off will probably involve agreement to follow certain standards and guidelines, but if these standards don't yet exist, how can we start?
To challenge this and present an alternative approach, why not make the "standards" part of the delivery itself? Make it clear up front that rather than wait for the standards to be released (which would be the normal mode of attack in large organisations), you will actively work with whichever standards body exists in the organisation to evolve just enough standards to support the actual work you are doing as you work through the backlog.
To make this work, COURAGE is imperative... Someone has to have the courage to put a stake in the ground early, recognising there is a small risk this may change. Developers should embed the standards into their automated testing as early as possible. This means that if and when a standard does change, there are tests in place which will assist developers in ensuring that all work to date is easily brought back into line...
The result of this is a design language that everyone can understand. When someone says they are writing a test which looks for the "jobs" tag in the currently featured news article, everyone should know what that refers to in the wireframes, and also how it will be identified and marked up in the implementation. This allows tests to be written before any code, and even lets the final "Look And Feel" progress alongside development.
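To make this concrete, here is a minimal sketch of such a test written before any page exists. The markup convention (`<article class="featured">` for the featured article, `<span class="tag">` for its tags) is entirely hypothetical, standing in for whatever the organisation's standards body agrees; the point is that the test encodes the standard, so a change to the standard shows up as a failing test.

```python
from html.parser import HTMLParser

# Hypothetical standard: the featured article is marked up as
# <article class="featured">, and its tags as <span class="tag">.
class TagCollector(HTMLParser):
    """Collects the tag names found inside the featured article."""
    def __init__(self):
        super().__init__()
        self.in_featured = False
        self.in_tag = False
        self.tags = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "article" and attrs.get("class") == "featured":
            self.in_featured = True
        elif self.in_featured and attrs.get("class") == "tag":
            self.in_tag = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_featured = False

    def handle_data(self, data):
        if self.in_tag:
            self.tags.append(data.strip())
            self.in_tag = False

def tags_of_featured_article(html):
    collector = TagCollector()
    collector.feed(html)
    return collector.tags

# The test exists before the real page does; this sample stands in for
# the implementation that will eventually be built to the standard.
SAMPLE = ('<article class="featured"><h1>News</h1>'
          '<span class="tag">jobs</span></article>')
assert "jobs" in tags_of_featured_article(SAMPLE)
```

Because the test talks in the shared design language ("the jobs tag in the featured article"), designers, testers and developers can all read it and agree on what it means.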
Of course, you're always free to continue in the traditional model and wait for three months until the standards body within the organisation produces a 300 page guidelines document before even starting that killer new feature that will storm the market... Or make totally random guesses, which are much more likely to be wrong, and be safe in the knowledge you have the traditional saviour of projects - Hope and Prayer!!!
Another Emergn coach asked me the other day how I distinguish between an acceptance test and a regression test. For me there is a very simple rule...
- If I write the test before I write any code, it's an acceptance test.
- If I write the test after I've written the code, it's a regression test.
- If I write code to make an acceptance test pass, it is now a regression test.
Keeping it as simple as this keeps you out of trouble. I've seen so many people try to retro-fit acceptance tests after they've written code, only to produce a test based on what they've written rather than what they should have written. It's a subtle but important point: writing a test for stuff you've already written (which might be wrong, since you haven't got an acceptance test) means you are potentially writing a test that verifies the system always does the wrong thing...
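A tiny, entirely hypothetical example of that trap. Suppose the business rule says orders over 100 get a 20% discount, but the code (written without an acceptance test) applies 10%:

```python
# Hypothetical business rule: orders over 100 get a 20% discount.
# The code below was written without an acceptance test, and got it wrong.
def discounted_total(amount):
    if amount > 100:
        return amount * 0.90   # bug: should be amount * 0.80
    return amount

# A "regression test" written afterwards, by reading the code,
# simply locks the bug in:
assert discounted_total(200) == 180.0   # passes, yet the behaviour is wrong

# The acceptance test written first, from the business rule,
# would have failed against this implementation:
# assert discounted_total(200) == 160.0
```

The retro-fitted test goes green, the build stays green, and the system confidently does the wrong thing forever after.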
Thanks to Ward Cunningham, we now have a wonderful metaphor "Technical Debt" which explains the common problem of skipping a little bit of design or missing out that little bit of refactoring to meet a deadline. Whenever we cut corners there is a very good chance we are taking on more and more Technical Debt.
But is there a flip side to this? I think there is, and the term I would use is Functional Debt. This sits firmly in the YAGNI camp and relates to functionality that is developed without a need (or, worse still, a test). Applying too much design, or developing generic frameworks with no business reason to do so, inevitably leads to a solution which is over-engineered. Of course, over-engineering as a term has been around for a long time, but I prefer the term Functional Debt, because this ties it back to money in a similar way to Technical Debt.
Debt is a term that evokes emotion and is easy for people to identify with, and it is this capacity of the term to clarify the issue with a certain practice that makes it so powerful. Over-engineering as a term doesn't evoke the same response, and certainly doesn't suggest a loss of money in the way that Debt does.
There are, of course, direct, easily measurable costs involved in creating unused functionality, namely the development costs. However, there are many more subtle costs that are easy to overlook. There is the missed opportunity cost of not doing the right thing. There is the project overhead of maintaining code that is not used. There is the overhead of increased complexity and time for the standard day-to-day activities of testing and refactoring. And there is the increased maintenance cost, since the code is now harder for support personnel to understand...
One of the biggest causes of Functional Debt I have seen is a lack of customer (business) involvement or direction. Left to their own devices, IS departments naturally build overly-complex solutions to simple problems. Without a business value attached to a piece of functionality (actually, to a problem that is solved by a piece of functionality), it is only too easy for the IS department to burn money like there's no tomorrow.
The more I reflect on this, the more I am left feeling that Agile is actually a mutation in software development, and this is one of the major reasons why Agile is so difficult to master. I wonder whether agile would ever naturally evolve in a small team left to their own devices, and I simply can't envisage it. Of course, this will now remain an academic hypothesis, since Agile has now stamped its influence indelibly on the DNA of software development.
Software development as a craft suffers from unnecessary complexity, and I fear that Agile, which initially thrived in simplifying (or removing) this complexity, is now becoming itself encompassed in unnecessary complexity. Despite agreeing with much of the sentiment of the software craftsmanship manifesto, I just can't bring myself to sign up to it yet. I struggle to see the benefit of more fuzzy aspirational statements, and would prefer to see a clarity of vision and a roadmap to achieving it.
Fundamentally Agile is fantastic, but sadly the passionate discussions, raging debates and conflicting methodologies don't clarify anything. If Agile doesn't clearly define itself soon I fear yet another mutation may take centre stage and Agile will end up being just a blip (albeit a very significant blip, where we gained sight) in the evolution of software development.
Test Driven is a loaded term and means different things to different people. I much prefer the term Test First which clearly states that the test comes before the implementation. However, for me, the value is not necessarily in creating an executable test, but in the thought processes that Test First brings out.
One of my colleagues at Emergn is constantly reminding me that when presented with a solution you need to ask yourself what the problem is. This is what led me to question TDD and its variants. If you search the web for Test Driven Development, you'll uncover a wealth of information from many of the authors in my BlogRoll, as well as many variants on a theme. I think the Wikipedia entry is a particularly good summary of Test Driven as it is currently understood by the community, but for the real meat and bones you need to look at the articles from the thought leaders behind the practices.
However, when I look at this wealth of information, I'm now faced with a question... OK, these are all solutions, but what is the fundamental problem they are solving? Is it that:
- Code quality/design is poor
- Code is overly complex/difficult to test
- Unused functionality within the code
- Too many bugs
It is only when we understand the underlying problem fully that we can then evaluate the applicability/suitability of a particular approach. Indeed, it is this lack of a clearly defined problem which makes it impossible to determine which approach is best, since we can't define any tests up front... This is actually a little bit of a paradox, but highlights for me one of the most important points about TDD. TDD is a solution to too many problems and in certain cases is not a very good solution.
Testing should be about proving functionality, but unfortunately we too often see TDD trying to address non-testing issues like design and code quality. Of course, code needs to be testable, but it is not the responsibility of the testing to enforce this; it is the responsibility of the design, and this is (unfortunately, IMHO) the missing piece in most methodologies... Indeed, the approach of writing the "Simplest Possible Thing" to pass a test is possibly my biggest bugbear with TDD. This approach often means you can pass tests with no functionality other than hard-coded return values; now that has to be the biggest process smell ever.
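The smell in miniature, with a hypothetical VAT calculation standing in for any real business rule. Taken literally, the "Simplest Possible Thing" that passes the one existing test is a constant:

```python
# Hypothetical rule: vat() should return 20% of the net price.
# The "Simplest Possible Thing" that passes the one test below:
def vat(net_price):
    return 24.0   # hard-coded to satisfy the single existing test;
                  # no tax calculation actually exists yet

assert vat(120.0) == 24.0   # green, but only for this exact input
assert vat(60.0) != 12.0    # the real 20% rule is not implemented
```

The test suite is green, yet nothing has been built; only adding more tests (or more thought) flushes out the constant.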
I'm going to stop using TDD, or Test Driven, now and instead use the term I prefer, which is Test First. For me, this means that before I do anything, I will determine ahead of time how I would test it. The immediate benefit (a.k.a value) I get from determining how to test something is that I'm also starting to think about (dare I suggest design) things like APIs/design/interaction/responsibilities. This is not the same as BDUF (big design up front); instead it is just enough thought at just the right moment to (hopefully) prevent a disaster. I can then apply some cost/benefit analysis, with a pinch of risk analysis, to decide whether to actually write all the tests that TDD would have forced me to write, or just a small percentage covering the important or more complex functionality.
Of course, in many cases I will actually create executable tests for many of those uncovered during this thought exercise, but will I go through the whole Red/Green/Refactor cycle? Possibly not. For me, Red/Green/Refactor is like micro context switching. I prefer a slightly longer period of focus, and therefore I may write quite a few tests at the same time before making that context switch into coding mode. This is of course my personal preference, and would undoubtedly be frowned upon by the dogmatic TDD zealots, but if this is what makes me more productive, and without a baseline problem statement, I challenge them to prove that this is not the best way to do your testing.
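A sketch of that batch-of-tests-first style, using a hypothetical `slugify()` helper as the subject. All the tests are written in one focused sitting, before any implementation exists; only then do I context switch into coding mode once to satisfy the lot:

```python
# All the tests for a hypothetical slugify() are written first,
# in one sitting, before any implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  padded  ") == "padded"
    assert slugify("Already-Slugged") == "already-slugged"

# One context switch into coding mode to satisfy them all:
import re

def slugify(text):
    """Lower-case, trim and hyphenate whitespace runs."""
    text = text.strip().lower()
    return re.sub(r"\s+", "-", text)

test_slugify()
```

The tests still come first, and still drive the design; I've just traded the micro Red/Green/Refactor loop for one longer stretch of test writing followed by one longer stretch of coding.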
Test First allows me to
- Understand the problem I'm trying to solve
- Think about how I will solve it (just enough design)
- Uncover any unknowns or risks hidden in the initial problem statement
- Produce high quality code which has just enough tests
Many of the Test Driven approaches do indeed address many of the problems stated earlier, but are they the only solution to these problems? Indeed, are these problems actually just symptoms of even deeper ones? If the problem is simply that your code is very tightly coupled and difficult to test, then the best ROI will probably come not from TDD but from up-skilling your development team in fundamental design principles (unskilled developers were the real problem in the first place, and TDD can't solve that).
- Wikipedia entry for Test Driven Development: http://en.wikipedia.org/wiki/Test-driven_development
- Introducing BDD by Dan North: http://dannorth.net/introducing-bdd (highly recommended reading)
- C2 wiki entry for Test Driven: http://c2.com/cgi/wiki?TestDriven
Stop right there...
If you truly can't change it, then you're truly not agile either.
To be agile doesn't mean you must follow any particular methodology; to be truly agile you must be actively seeking to constantly improve every aspect of what you do. Whether that involves trying out some lean principles to eliminate waste, or TDD to improve the quality of the tests, it doesn't matter.
So, the next time you hear yourself saying "I'm agile, but...", you've just identified the next problem to solve in your own methodology, and you're also just a little bit more agile than you already were...
I've got several things bubbling over at the moment, and because I never bothered capturing them as stories and placing them in a backlog, I'm really struggling to make headway with any of them. Some are personal experimentation, others are professional, and finally there are a few projects which may end up as open source.
So what's stopping me making this backlog? It's the lack of a single coherent customer to do the prioritisation, of course. I could certainly pull out all the things I want to do into a set of stories, but would it be right for me to also play the customer role? I certainly don't want to ask my work to do the prioritisation, since I know which side of the fence they'll come down on...
I think my personal challenge is quite similar to many projects I have witnessed where there was not a single customer. It becomes extremely difficult to prioritise stories and deliver a single useful end-to-end feature, and instead we deliver lots of little bits and pieces. Of course, switching contexts is in itself extremely expensive in terms of productivity, but unless there is a single customer, with clear goals and a solid backlog of well expressed stories, then switching contexts is inevitable.
What do I plan to do? Well first and foremost I'm going to take the baby-step of at least creating my backlog of stories... By definition, these are my stories, so in reality I am the customer, I guess I'm just like all other customers, I'm not a very good one and I like to keep changing my mind - oh well...
The internet has now pervaded my life to such an extent I'm going to soon be looking to get a direct connection to my brain. But before I do, I'm in desperate need for a way to organise the constant, chaotic stream of information that is radiating from it...
Here's my list of basic requirements.
I want to be able to
- view all messages in a single application, regardless of source (rss, google reader, twitter, etc).
- post messages to any of the applications I use (twitter, blog, facebook, etc)
- Should also be able to post to multiple applications at once
- search and filter the content
- filtered messages should still be available for future search/display
- use multiple platforms (phone, web, laptop, desktop) with them all kept in sync
- specify multiple ways of notifying me if anything relevant appears
- particularly important for mobile platforms
- switch between different modes (professional, private, meeting)
But MOST IMPORTANTLY
I DO NOT WANT TO:
- create YABA (yet another b****y account) to use yet another free online tool
- wait for hours while it downloads everything before I can view a single message
If anyone knows of such a tool, I'll be extremely happy to hear about it...