Top Tips – Advanced Acceptance Test Driven Development
Over the course of my career I've had the pleasure of working with some great agile teams. I've also had some bitter disappointments working with great developers, testers and BAs who just don't get it...
Many of the teams that get it didn't actually use natural language to create executable acceptance tests. They did, however, have extensive suites of automated acceptance tests, usually written by the business analysts or developers, but in a language and style that is unfamiliar to the non-agile developers I have encountered. So, in an attempt to capture the difference, I'm going to provide some useful tips and techniques to challenge those attempting to adopt acceptance test driven development within a corporate environment.
I will begin by recommending the various conference videos from GTAC. I'm not saying Google are doing it perfectly (I just don't know), but I am happy to believe they are probably doing lots of things right...
And most importantly, if we are going to go to the bother of creating executable acceptance tests, think carefully about who is accepting them. If the only person who will accept them (and I mean really accept, as in understand and even be happy to take ownership of them) is the developer, then use the most appropriate tool.
So the tips and techniques...
- Make sure the story is the right story... If you have a story that is purely technical, then it's possibly better to cover it with developer tests; it's unlikely to be something the business "really" cares about... If the story isn't for a paying customer but for an internal user, try to find out what benefit that internal user is going to provide for the customer and reword the story from the end user's perspective.
- Don't clean up after tests... What matters more for acceptance testing is knowing the state of the environment at the beginning of the test and ensuring the test can run from that state. Leaving the data created by the test can help immensely when issues are found. The amount and complexity of changes an acceptance test can inflict on an environment, combined with the number of points at which it can fail, makes cleaning up extremely complex and error prone, and the effort does not provide the same level of ROI as it does for unit tests. This has the added benefit of building a business case for better, more flexible environments and continuous delivery...
- Create unique contexts for each test... To prevent tests stepping on each other's toes when they run in parallel, create a unique context for each test. This could be as simple as creating a user with a unique id for that test, or it might require creating a collection of unique items you plan to use (e.g. instruments in a trading app, pages in a CMS, articles for a blog); there's a short sketch of this after the list.
- Don't wait for something, make it happen... Where you come across a situation where you need to wait for something, prod the system to make it happen, and wrap the check in a spin loop so that the test still passes in an environment where you can't prod (see the spin loop sketch after the list).
- Question everything, even the test framework... As you develop your acceptance tests, the supporting test framework and ultimately the application, continually ask yourself what would happen if you replaced x with y. For a web based application, the questions might be: what would happen if we wanted to make this available on an Android device or an iPhone, does my acceptance test still hold true? Can my test framework support this easily without visiting all the fixtures? What if I change the test framework I use?
- Use the English language to drive the domain model... Good acceptance tests usually make explicit the domain model needed to support the testing, and more often than not this drives the actual domain model needed within the application.
- Use the real application code if at all possible... Rather than completely decouple your tests from the implementation, use the real implementation at the appropriate layer. This has the benefit that changes to the implementation require no changes to the tests. Achieving it requires a suitably layered test framework to stop implementation changes rippling too far up and breaking tests (the layering sketch after the list illustrates the idea). The best candidates for reuse are typically the domain models, data access components and service interfaces.
- Assume you are running everything on your own machine until you can't... Start with the assumption that everything you need is running on your local development machine, since ultimately the goal is that you can actually run these tests locally to check the functionality works. Once you have a test running and passing locally, you know the functionality is working and you are then in a better place to refactor the test to support different environments.
- Keep the tests isolated... Don't try to optimise tests by adding additional verifications or steps to existing stories; create new tests. This might expose problems with running the tests quickly, but explore other solutions to that rather than creating huge tests that test too much. And imagine how the business will react when you say you are running 500 business tests with a 100% pass rate but can't test their new features because you don't have enough kit...
- Don't write the test at all... If the story doesn't have much value, or the systems you are using are not in your control and are not test friendly, then stop just short of automating it... Work out how you might automate it; the exercise will highlight the blockers and drive a better story and clearer acceptance criteria. But weigh up the cost of writing, maintaining and executing the test against the value of the story and the true cost/likelihood of a defect in that story...
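To make the unique-context tip concrete, here is a minimal sketch in Java. The TradingSystemDriver class, its methods and the instrument name are all invented for illustration; the only point being made is that the test mints its own identifier rather than relying on shared data.

```java
import java.util.*;

// A minimal sketch, not a real framework: the "driver" below stands in for
// whatever layer your acceptance tests use to talk to the application.
public class UniqueContextExample {

    // Hypothetical driver; in a real suite this would call the application's
    // own service interface rather than an in-memory map.
    static class TradingSystemDriver {
        private final Map<String, List<String>> ordersByUser = new HashMap<>();

        void createUser(String userId) {
            ordersByUser.put(userId, new ArrayList<>());
        }

        void placeOrder(String userId, String instrument, int quantity) {
            ordersByUser.get(userId).add(instrument + " x" + quantity);
        }

        List<String> ordersFor(String userId) {
            return ordersByUser.getOrDefault(userId, List.of());
        }
    }

    public static void main(String[] args) {
        TradingSystemDriver system = new TradingSystemDriver();

        // Each run creates its own uniquely named user, so tests running in
        // parallel (or re-running against a dirty environment) never collide.
        String userId = "atdd-user-" + UUID.randomUUID();
        system.createUser(userId);
        system.placeOrder(userId, "VOD.L", 100);

        if (system.ordersFor(userId).size() != 1) {
            throw new AssertionError("expected exactly one order for " + userId);
        }
        System.out.println("order recorded for " + userId);
    }
}
```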
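And a sketch of the spin loop idea, again with invented names (eventually, triggerBatchJobNow and tradeIsSettled are placeholders): prod the system where the environment allows it, then poll for the outcome with a timeout so the same test also passes somewhere the prod isn't possible.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

// A minimal sketch of "don't wait for something, make it happen".
public class SpinLoopExample {

    // Poll a condition until it becomes true or the timeout expires.
    static boolean eventually(BooleanSupplier condition, Duration timeout) {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(200); // small pause between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) {
        // Hypothetical scenario: instead of sleeping until an overnight batch
        // job eventually runs, trigger it directly where the environment allows...
        boolean canProd = true;
        if (canProd) {
            triggerBatchJobNow(); // placeholder for a real "prod"
        }

        // ...then spin on the outcome, so the same test still passes in an
        // environment where the prod isn't possible and the job runs on its own.
        boolean settled = eventually(SpinLoopExample::tradeIsSettled, Duration.ofSeconds(30));
        if (!settled) {
            throw new AssertionError("trade was not settled within 30 seconds");
        }
        System.out.println("trade settled");
    }

    static void triggerBatchJobNow() { /* placeholder for prodding the system */ }

    static boolean tradeIsSettled() { return true; /* placeholder check */ }
}
```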
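Finally, a rough illustration of the layering that makes reusing real application code practical: the tests only ever talk to a thin driver, and the driver is the single place that knows about the application's service interface, so an implementation change is absorbed there rather than in every test. AccountService and AccountDriver are hypothetical names, and the in-memory fake exists only so the sketch runs on its own.

```java
// A sketch of a layered test framework reusing the application's own interface.
public class LayeredDriverExample {

    // Imagine this is the application's real service interface, reused as-is.
    interface AccountService {
        void open(String accountId);
        long balance(String accountId);
    }

    // The driver is the only test code that touches the implementation.
    static class AccountDriver {
        private final AccountService service;

        AccountDriver(AccountService service) {
            this.service = service;
        }

        void openAccount(String accountId) {
            service.open(accountId);
        }

        long balanceOf(String accountId) {
            return service.balance(accountId);
        }
    }

    public static void main(String[] args) {
        // An in-memory fake stands in for the real implementation purely so
        // the sketch runs; a real suite would wire in the production service.
        java.util.Map<String, Long> store = new java.util.HashMap<>();
        AccountService realOrFake = new AccountService() {
            public void open(String accountId) { store.put(accountId, 0L); }
            public long balance(String accountId) { return store.get(accountId); }
        };

        AccountDriver driver = new AccountDriver(realOrFake);
        driver.openAccount("acc-1");
        if (driver.balanceOf("acc-1") != 0L) {
            throw new AssertionError("new accounts should start with a zero balance");
        }
        System.out.println("new account opened with zero balance");
    }
}
```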
I'm sure a few of these will feel a little controversial or sit uncomfortably depending on your experience. I'm also sure some appear on the face of it to conflict with others. For those who reach nirvana, you will end up with a suite of extremely robust acceptance tests (owned and fully understood by the business), which a developer can run locally before committing code and which are then run again in a virtual, production-like cloud.
Keeping It Simple – Regression vs Acceptance Testing
Another Emergn coach asked me the other day how I distinguish between acceptance tests and regression tests. For me there is a very simple rule...
- If I write the test before I write any code, it's an acceptance test.
- If I write the test after I've written the code, it's a regression test.
- If I write code to make an acceptance test pass, it is now a regression test.
Keeping it as simple as this keeps you out of trouble. I've seen so many people try to retro-fit acceptance tests after they've written code, only to write a test based on what they've written rather than what they should have written. It's a subtle but important point: writing a test for what you've already written (which might be wrong, since you haven't got an acceptance test) means you are potentially writing a test that asserts the system always does the wrong thing...
“Natural Language” Automated Acceptance Testing
I read with extreme interest James Shore's blog about FIT but was dismayed that he devalues automated acceptance testing. To claim that FIT is a "natural language" is wrong; it is a developer language, and this is possibly why customers don't get involved. Concordion, on the other hand, is natural language and I think plays much better in this arena. In addition it is much more developer friendly.
I've written previously that for me the value of test first is the thought processes surrounding it; however, where applicable, converting these into automated tests, and in particular automated acceptance tests, is a huge win. I would love to have a customer "own" the tests, but when this isn't possible (almost always) I will try to put my "customer" hat on, think like the customer and express what I'm about to do in their language (which will be English, not FITnese, Selenese or RSpec). If the customer is happy with my specification, I can then use it directly as my test.
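To show what I mean, here is roughly what the classic Concordion hello-world style of example looks like (a sketch only, assuming Concordion and JUnit 4 on the classpath; the greeting behaviour is a stand-in for real application code). The specification is an ordinary English sentence in an HTML page, and a small Java fixture wires the words to the system.

```java
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

/*
 * The specification itself is a plain-English HTML page (HelloWorld.html,
 * kept next to this class on the classpath), roughly:
 *
 *   <html xmlns:concordion="http://www.concordion.org/2007/concordion">
 *     <body>
 *       <p>
 *         The greeting for user
 *         <span concordion:set="#firstName">Bob</span>
 *         will be:
 *         <span concordion:assertEquals="greetingFor(#firstName)">Hello Bob!</span>
 *       </p>
 *     </body>
 *   </html>
 *
 * A customer can read and agree to that sentence without knowing any code.
 */
@RunWith(ConcordionRunner.class)
public class HelloWorldFixture {

    // The fixture is the only glue: it exposes the behaviour the sentence
    // describes, and would normally delegate to the real application code.
    public String greetingFor(String firstName) {
        return "Hello " + firstName + "!";
    }
}
```

The customer reads and signs off the sentence; the developer owns only the glue underneath it.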
So for me, the lack of customer isn't the problem, but I agree with James on one point: there is a problem...
It's the people... The majority of developers I've encountered can't think like the "Customer" and instead thrive on complexity. They can't express the problem or solution correctly and write tests that become implementation specific. This means they have written a test for one specific solution, where actually there could be a multitude of solutions, even in the same technology. When they then 'implement' this solution and the customer doesn't like it, the test can't be reused and needs to be 'reworked' (I'm avoiding 'refactored', since the test was actually wrong, and therefore it should be fixed, not refactored). This is the problem: the test may be rewritten many times, at which point the customer will be thinking, this is now the nth time I've asked for this exact same feature and I've seen five different versions of a test for the same thing, none of which are producing what I'm asking for. If I were that customer, would I want to own these "tests" which seem to be so difficult to change and are such a burden whenever the implementation is tweaked?
So for me, if I don't know what I'm doing, I won't do it and will ask for help from someone who does know what they're doing. I would encourage all developers to have the courage to admit when they are out of their depth with a practice and seek advice, rather than struggle on developing the wrong thing which ultimately ends up having little value.
I forever find myself coming back to the five values, and when I measure FIT against simplicity, communication and feedback it would come in at "Good, could do better"...