Agile Insider reality bytes…

15Feb/12

Top Tips – Advanced Acceptance Test Driven Development

Focus on Success

Over the course of my career I've had the pleasure of working with some great agile teams. I've also had some bitter disappointments working with great developers, testers and BAs who just don't get it...

Many of the teams that get it didn't actually use natural language to create executable acceptance tests. They did, however, have extensive suites of automated acceptance tests, usually written by the business analysts or developers, but in a language and style that is unfamiliar to most of the non-agile developers I have encountered. So, in an attempt to capture the difference, I'm going to offer some tips and techniques to challenge those attempting to adopt acceptance test driven development within a corporate environment.

I will begin by recommending the various conference videos from GTAC. I'm not saying Google are doing it perfectly (I just don't know), but I am happy to believe they are probably doing lots of things right...

And most importantly, if we are going to go to the bother of creating executable acceptance tests, think carefully about who is accepting them. If the only person who will accept them (and I mean really accept, as in understand and even be happy to take ownership of them) is the developer, then use the most appropriate tool.

So the tips and techniques...

  1. Make sure the story is the right story... If a story is purely technical, it's probably better tested with developer tests; it's unlikely to be something the business really cares about. If the story isn't for a paying customer but for an internal user, find out what benefit that internal user provides to the customer and reword the story from the end user's perspective.
  2. Don't clean up after tests... What matters more for acceptance testing is knowing the state of the environment at the beginning of the test and ensuring the test can run from that state. Leaving the data created by the test can help immensely when issues are found. The amount and complexity of changes an acceptance test can inflict on an environment, combined with the number of points at which it can fail, makes cleaning up extremely complex and error prone, and it does not provide the return on investment it does for unit tests. This also has the added benefit of building a business case for better, more flexible environments and continuous delivery...
  3. Create unique contexts for each test... To prevent tests stepping on each other's toes when run in parallel, create a unique context for each test. This could be as simple as creating a user with a unique id for that test, or it might require creating a collection of unique items you plan to use (e.g. instruments in a trading app, pages in a CMS, articles for a blog).
  4. Don't wait for something, make it happen... Where you need to wait for something, prod the system to make it happen, and use a spin loop so that the test still passes in environments where you can't prod it (see the sketch after this list).
  5. Question everything, even the test framework... As you develop your acceptance tests, the supporting test framework and ultimately the application, continually ask yourself what would happen if you replaced x with y. For a web based application you might ask: if we wanted to make this available on an Android device or iPhone, does my acceptance test still hold true? Can my test framework support this easily without visiting all the fixtures? What if I change the test framework I use?
  6. Use the English language to drive the domain model... Good acceptance tests usually make explicit the domain model needed to support the testing, and more often than not this drives the actual domain model needed within the application.
  7. Use the real application code if at all possible... Rather than completely decoupling your tests from the implementation, use the real implementation at the appropriate layer. This has the benefit that changes to the implementation require no changes to the tests. Achieving it requires a suitably layered test framework that stops implementation changes rippling up and breaking tests. The best candidates for reuse are typically the domain models, data access components and service interfaces.
  8. Assume you are running everything on your own machine until you can't... Start with the assumption that everything you need is running on your local development machine; ultimately the goal is that you can run these tests locally to check the functionality works. Once you have a test running and passing locally, you know the functionality works and you are in a better place to refactor the test to support different environments.
  9. Keep the tests isolated... Don't try to optimise by adding extra verifications or steps to existing tests; create new ones. This might expose problems with running the tests quickly enough, but explore other solutions to that rather than creating huge tests that test too much. And imagine how the business will react when you say you are running 500 business tests with a 100% pass rate but can't test their new features because you don't have enough kit...
  10. Don't write the test at all... If the story doesn't have much value, or the systems you are using are not in your control and are not test friendly, then stop just short of automating it... Work out how you might automate it; the exercise will highlight the blockers and drive a better story and clearer acceptance criteria. But weigh up the cost of writing, maintaining and executing the test against the value of the story and the true cost and likelihood of a defect in that story...
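
To illustrate tip 4, here is a minimal sketch of the spin-loop idea, written in Java simply because that is the stack discussed elsewhere on this blog; the orderService, triggerSettlementRun and isSettled names are hypothetical. The helper just polls an observable outcome until it holds or a timeout expires, rather than sleeping for one fixed, pessimistic period:

    import java.util.function.BooleanSupplier;

    public final class WaitFor {

        private WaitFor() {}

        // Polls the condition until it holds or the timeout expires, failing the test if it never does.
        public static void waitFor(BooleanSupplier condition, long timeoutMillis) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (!condition.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new AssertionError("Condition not met within " + timeoutMillis + "ms");
                }
                try {
                    Thread.sleep(100); // short poll interval between checks
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new AssertionError("Interrupted while waiting", e);
                }
            }
        }
    }

A test would then prod the system where the environment allows (for example, triggering the batch job directly) and spin on the outcome either way:

    orderService.triggerSettlementRun();                            // prod the system where we can
    WaitFor.waitFor(() -> orderService.isSettled("O-123"), 30_000); // spin on the observable outcome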

I'm sure a few of these will feel a little controversial or sit uncomfortably, depending on your experience. I'm also sure some appear, on the face of it, to conflict with others. For those who reach nirvana, you will end up with a suite of extremely robust acceptance tests (owned, and fully understood, by the business), which a developer can run locally before committing code and which are then run again in a virtual, production-like cloud.

19Jan/12

Why Bother with Automated Acceptance Tests

Who's this for?

I'm about to write a few articles covering some advanced acceptance testing techniques. I don't plan to get into the nitty gritty technical details; instead I want to discuss the whys... For some great material around acceptance testing I highly recommend the Concordion techniques page, and I can't speak highly enough of Gojko Adzic; look at his blog and, in particular, the comments on his posts.

The question I want to ask is slightly more philosophical. Why are we really writing automated acceptance tests and who are they really for?

If you have an on-site customer who writes the stories and tests (acceptance criteria) in a non-implementation-specific way, then stop reading right now and send me the link to your blog. Otherwise...

In an acceptance test driven environment, the acceptance tests help ensure you have solved the problem, and developer tests help ensure you are solving the problem the right way. To validate that we are solving the right problem, we need to express the tests in a way that doesn't tie us to a particular implementation, so we probably want to drive them from the user experience: in particular, the functionality we expose to the user and what the user can expect when using that functionality. So we are writing acceptance tests that check that the functionality we make available to our customers works correctly, without worrying about how we will provide it. But does that mean we expect our customer to "accept" those acceptance tests?

In agile teams you probably have a product owner, and in an ideal world we would want the product owner to "own" these acceptance tests. More often than not, the product owner will happily own the story but will delegate the specifics (which sadly often include testing) to a business analyst. Our goal is to get the product owner to own these tests, but with a business analyst in the way we are probably already at the stage where any tests will be implementation specific, since the business analyst is doing exactly that: working out how to solve the problem... In fact, business analysts probably don't want to own the tests either, which leaves the team...

Let's reflect for a moment... We want the customer or product owner to own the acceptance tests, but it usually ends up being the team that owns them, so let's explore what typically happens... The team searches the web for acceptance testing techniques, comes across BDD and sees there is a wealth of tools supporting it. They pick up a tool (Cucumber, JBehave, etc.) and all tests are now captured and represented in pseudo-English, in the hope that the product owner or business analyst can start creating these tests themselves. I've yet to meet a product owner or business analyst (or indeed a developer) who uses this style of language:

A product owner walks into a bar and says to the barman:

"Given I am thirsty
and I am standing at a bar
and there is a beer mat in front of me,
When I ask the barman for a pint of his best bitter
Then the barman will pour a pint of his best bitter
and place it on the beer mat in front of me".

Just a little bit verbose (not to mention slightly implementation specific) for ordering a pint of best bitter. So my point is that BDD is a technique: it is invaluable for exploring the problem domain and capturing scenarios and examples that help you understand a problem, but those scenarios are not a specification in and of themselves. Using a tool too early to automate them ties you into this form of unnatural expression and eliminates a choice of how to engage with the customer later.

As a team, using the technique in discussions but capturing the executable part in a tool or framework (e.g. xUnit) more suited to its real owners (the developers) means you can defer the choice of customer-facing tool to a more appropriate moment, when the customer actually wants to engage, and in the meantime benefit from an environment and language the developers are most comfortable with... I've written previously that even working out what you plan to test or demonstrate, before working out how to implement it, can add immense value as a thought exercise.
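
As a rough illustration of what that might look like, here is the same pint-of-best-bitter example expressed as a plain JUnit 4 test; Barman, BeerMat and Pint are hypothetical domain classes, not part of any framework:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class OrderingAPintTest {

        @Test
        public void barmanServesAPintOfBestBitterOnTheBeerMat() {
            Barman barman = new Barman();
            BeerMat mat = new BeerMat();

            // the behaviour explored in the BDD conversation, without the ceremony
            Pint pint = barman.order("best bitter", mat);

            assertEquals("best bitter", pint.contents());
            assertEquals(mat, pint.placedOn());
        }
    }

The conversation still happens in the business's language; the executable part lives in a tool the developers already own, and a customer-facing format can be layered on top later, if and when the customer actually wants to engage.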

Toxic Waste

There is another scenario which is by far the most dangerous... Having browsed the web, we want a cross functional team, so we embed a tester into the team to perform this function. The tester works closely with the business analyst and creates/owns the functional tests. Most testers are new to development and don't have the skills or experience of the developers when it comes to writing code, and worse, we are trusting these inexperienced developers with writing the most important code in the system: the tests that validate that what we are doing is correct... Inevitably we end up with an enormous suite of functional tests that are very "script" based, not easy to understand, and add little if any value to the day to day activities of the team.

So to recap: we want to write acceptance tests to validate we are building the right thing (and that once it is built, or later reimplemented, it continues to work), and we want the customer (or product owner) to "own" them. If either of these is not true in your organisation, then seriously ask yourself why you are doing what you are doing, put the customer's hat on, and ask whether you (as a customer with limited technical knowledge) would ever "accept" what is being done on your "behalf"...

12Apr/11

Concordion Plus

I've written in the past about Using English to write Acceptance Tests, and the tool I choose/advocate is without doubt Concordion.  Customers love seeing their stories come alive, but I've found developers can sometimes struggle to differentiate these tests from JUnit tests, particularly since JUnit is used to provide the execution mechanism.
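
For anyone who hasn't seen Concordion, a minimal fixture might look like the sketch below (assuming Concordion and JUnit 4 are on the classpath); the greeting example and the paired GreetingTest.html specification are illustrative rather than taken from a real project:

    // Pairs with an HTML specification (GreetingTest.html) containing markup such as:
    //   The greeting for <span concordion:set="#firstName">Bob</span> will be
    //   <span concordion:assertEquals="greetingFor(#firstName)">Hello Bob!</span>
    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    @RunWith(ConcordionRunner.class)
    public class GreetingTest {

        // Called from the specification; the spec, not this class, owns the expected wording.
        public String greetingFor(String firstName) {
            return "Hello " + firstName + "!";
        }
    }

The specification reads as the customer's document, while the fixture is plain Java run by JUnit, which is exactly where the line between acceptance tests and ordinary JUnit tests starts to blur for developers.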

I've also found that in many of the situations/organisations where I have introduced Concordion, a single story has required several tests. Although the Concordion guide presents some excellent techniques to deal with this, teams new to writing acceptance tests can be uncomfortable capturing stories in this format, and customers might not be convinced that all their acceptance criteria are being met.  I am therefore pleased to be releasing Concordion+ to the wild.  At the moment it is a simple extension to Concordion which allows developers to create individual scenarios (or test cases) within a single story, and to ignore the scenarios they are actively working on.  In addition, a new JUnit runner captures the state of each of these scenarios independently and reports them in the common IDEs, while also allowing developers to use the JUnit @Before and @After annotations.  This should simplify adoption by developers, since they now have a JUnit lifecycle they understand.

I have to send a huge thank you to Nigel Charman for the concordion-extensions project, which helped immensely with my development.  And of course I can't fail to mention the excellent work by David Peterson on Concordion itself, and particularly the facility to write extensions 😉

I hope you enjoy using it as much as I enjoyed creating it...

22Apr/10

Enterprise Agile – Evolutionary Standards

Low Tech Evolutionary Standards

At the risk of being lambasted by the agile community I will use the words enterprise and agile in the same sentence 😉

This article largely follows on from some previous entries and in particular my entry on user centred test driven development.

It is often a complaint that large organisations trundle along painfully and slowly.  Work can't start without following some process or other until you have sign-off.  Part of this sign-off will probably involve agreement to follow certain standards and guidelines, but if these standards don't yet exist how can we start???

To challenge this and present an alternative approach, why not make the "standards" part of the delivery itself?  Make it clear up front that, rather than waiting for the standards to be released (which would be the normal mode of attack in large organisations), you will actively work with whatever standards body exists in the organisation to evolve just enough standards to support the actual work you are doing as you work through the backlog.

To make this work, COURAGE is imperative...  Someone has to have the courage to put a stake in the ground early, recognising there is a small risk it may change. Developers should embed the standards into their automated testing as early as possible; this means that when a standard does change, there are tests in place which help developers bring all the work done to date back in line quickly...

The result is a design language that everyone can understand. When someone says they are writing a test which looks for the jobs tag in the currently featured news article, everyone should know what that refers to in the wireframes, and also know how it will be identified and marked up in the implementation.  This allows tests to be written before any code, and even allows the final "look and feel" to progress alongside development.
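
As a sketch of what embedding such a standard into an automated test might look like (assuming jsoup and JUnit 4 are available; the class names, CSS selector and renderHomePage helper are all hypothetical):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;

    public class FeaturedArticleMarkupTest {

        @Test
        public void featuredNewsArticleCarriesTheJobsTag() {
            Document page = Jsoup.parse(renderHomePage());

            // The selector is the agreed standard: anyone can read it against the wireframes.
            assertFalse("Featured article should carry the 'jobs' tag",
                    page.select(".featured-article .tag:contains(jobs)").isEmpty());
        }

        // In a real test this would render or fetch the page under test.
        private String renderHomePage() {
            return "<div class='featured-article'><span class='tag'>jobs</span></div>";
        }
    }

If the standards body later changes the agreed class names, the failing tests list exactly the pages that need bringing back in line.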

Of course, you're always free to continue in the traditional model and wait for three months until the standards body within the organisation produces a 300 page guidelines document before even starting that killer new feature that will storm the market...  Or make totally random guesses, which are much more likely to be wrong, and be safe in the knowledge you have the traditional saviour of projects - Hope and Prayer!!!

20Apr/10

Keeping It Simple – Regression vs Acceptance Testing

Another Emergn coach asked me the other day how I distinguish between acceptance tests and regression tests.  For me there are some very simple rules...

  • If I write the test before I write any code, it's an acceptance test.
  • If I write the test after I've written the code, it's a regression test.
  • If I write code to make an acceptance test pass, it is now a regression test.

Keeping it as simple as this keeps you out of trouble. I've seen so many people try to retro-fit acceptance tests after they've written code, only to write a test based on what they've built rather than what they should have built.  It's a subtle but important point: writing a test for what you've already written (which might be wrong, since you didn't have an acceptance test) means you are potentially writing a test that asserts the system always does the wrong thing...