Why Bother with Automated Acceptance Tests
I'm about to write a few articles covering some advanced acceptance testing techniques. I don't plan to get into the nitty-gritty technical details; instead I want to discuss the whys... For some great material around acceptance testing I highly recommend the Concordion techniques page. I also can't speak highly enough of Gojko Adzic, and recommend you look at his blog and in particular the comments on his posts.
The question I want to ask is slightly more philosophical. Why are we really writing automated acceptance tests and who are they really for?
If you have an on-site customer who writes the stories and tests (acceptance criteria) in a non-implementation-specific way, then stop reading right now and send me the link to your blog. Otherwise...
In an acceptance test driven environment, the acceptance tests help ensure you have solved the right problem, while developer tests help ensure you are solving the problem the right way. To validate we are solving the right problem, we need to express the tests in a way which doesn't tie us to a particular implementation. We probably want to drive this from the user experience: the functionality we expose to the user, and what the user can expect when using that functionality. So we are writing acceptance tests that check that the functionality we are making available to our customers works correctly, without worrying about how we will provide that functionality. But does that mean we are expecting our customer to "accept" those acceptance tests?
In agile teams you probably have a product owner, and in an ideal world we would want the product owner to "own" these acceptance tests. More often than not, the product owner will happily own the story but will delegate owning the specifics (which sadly often includes testing) to a business analyst. Our goal is to get the product owner to own these tests, but with a business analyst in the way we are probably already at the stage where any tests will be implementation-specific, since the business analyst is probably doing exactly that: working out how to solve the problem... In fact, business analysts probably don't want to own the tests either, which leaves the team...
Let's reflect for a moment... We want the customer or product owner to own acceptance tests, but instead it usually ends up being the team that owns them, so let's explore what typically happens... The team search the web for acceptance testing techniques, come across BDD, and see there is a wealth of tools out there supporting it. They pick up a tool (Cucumber, JBehave, etc.) and all tests are now captured and represented in pseudo-English, in the hope that the product owner or business analyst can start creating these tests themselves. I've yet to meet a product owner or business analyst (or indeed a developer) who uses this style of language...
A product owner walks into a bar and says to the barman:
"Given I am thirsty
and I am standing at a bar
and there is a beer mat in front of me,
When I ask the barman for a pint of his best bitter
Then the barman will pour a pint of his best bitter
and place it on the beer mat in front of me".
Just a little bit verbose (not to mention slightly implementation-specific) for ordering a pint of best bitter. So my point is that BDD is a technique: it is invaluable for exploring the problem domain and capturing scenarios and examples to help understand a problem, but those scenarios are not a specification in and of themselves. Using a tool too early to automate them ties you into this form of unnatural expression and eliminates the choice of how to engage with the customer later.
As a team, using the technique in discussions but then automating with a tool or framework (e.g. xUnit) more suited to the real owner of the executable part (the developers) means you can defer the choice of customer-facing tool to a more appropriate moment, when the customer actually does want to engage, while benefiting from an environment and language the developers are most comfortable with... I've written previously that even working out what you plan to test or demonstrate, before working out how to implement it, can add immense value as a thought exercise.
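To make that concrete, here is a minimal sketch of the bar scenario above expressed as a plain JUnit test (all class and method names are invented for this example, not taken from any framework): the "given" becomes setup, the "when" a method call, and the "thens" ordinary assertions.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderingAPintTest {

    // Tiny stand-in domain classes so the sketch compiles on its own.
    static class Pint {
        final String description;
        Pint(String description) { this.description = description; }
    }

    static class Barman {
        Pint pour(String request) { return new Pint(request); }
    }

    static class BeerMat {
        Pint contents;
        void place(Pint pint) { this.contents = pint; }
    }

    @Test
    public void barmanServesAPintOfBestBitterOnTheBeerMat() {
        // Given a thirsty patron standing at the bar with a beer mat
        Barman barman = new Barman();
        BeerMat mat = new BeerMat();

        // When the patron asks for a pint of best bitter
        mat.place(barman.pour("best bitter"));

        // Then the pint is on the beer mat in front of them
        assertEquals("best bitter", mat.contents.description);
    }
}
```

The scenario reads just as clearly, the developers stay in a language they are comfortable with, and nothing stops you putting a customer-facing tool in front of it later.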
There is also another scenario, which is by far the most dangerous... Having browsed the web, we want a cross-functional team, so we embed a tester into the team to perform this function. The tester works closely with the business analyst and creates/owns functional tests. Many testers are new to development and don't have the skills or experience of the developers when it comes to writing code, and worse, we are trusting these inexperienced developers with the most important code in the system: the tests that validate that what we are doing is correct... Inevitably we end up with an enormous suite of functional tests that are very "script" based, not easy to understand, and adding little if any value to the day-to-day activities of the team.
So to recap: we want to write acceptance tests to validate that we are building the right thing (and that, once built or reimplemented, it continues to work), and we want the customer (or product owner) to "own" them. If either of these is not true in your organisation, then seriously ask yourself why you are doing what you are doing. Put the customer's hat on and ask whether you (as a customer with limited technical knowledge) would ever "accept" what is being done on your "behalf"...
Concordion Plus
I've written in the past about Using English to write Acceptance Tests and the tool I choose/advocate is without doubt Concordion. Customers love seeing their stories come alive, but I've found developers can sometimes struggle to differentiate these from JUnit tests, particularly since JUnit is used to provide the execution mechanism.
I've also found that in many of the situations/organisations where I have introduced Concordion, a single story has required several tests. Although the Concordion guide does present some excellent techniques to deal with this, teams new to writing acceptance tests can be uncomfortable capturing stories in this format, and customers might not be convinced that all their acceptance criteria are being met. I am therefore pleased to be releasing Concordion+ to the wild. At the moment it is a simple extension to Concordion which allows developers to create individual scenarios (or test cases) within a single story, and to ignore the scenarios they are actively working on. In addition, a new JUnit runner captures the state of each of these scenarios independently and reports them in the common IDEs, while also allowing developers to use the JUnit @Before and @After annotations. This should simplify adoption by developers, since they now have a JUnit lifecycle they understand.
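As a sketch of what this looks like from the developer's side (the runner name and fixture shape here are illustrative guesses based on the description above, not necessarily the published Concordion+ API), a story fixture might read something like this:

```java
import org.junit.After;
import org.junit.Before;
import org.junit.runner.RunWith;

// Hypothetical: "ScenarioRunner" is an assumed name for the new JUnit
// runner described above, which reports each scenario in the story's
// HTML specification independently.
@RunWith(ScenarioRunner.class)
public class TransferFundsStoryTest {

    // Tiny stand-in domain class so the sketch is self-contained.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void transfer(int amount, Account destination) {
            balance -= amount;
            destination.balance += amount;
        }
        int balance() { return balance; }
    }

    private Account from;
    private Account to;

    @Before
    public void openAccounts() {
        // The familiar JUnit lifecycle runs around each scenario.
        from = new Account(100);
        to = new Account(0);
    }

    @After
    public void closeAccounts() {
        // Tidy up after each scenario independently.
    }

    // Invoked from the story's HTML specification.
    public int balanceAfterTransferring(int amount) {
        from.transfer(amount, to);
        return to.balance();
    }
}
```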
I have to send a huge thank you to Nigel Charman for the concordion-extensions project, which helped immensely with my development. And of course I must mention the excellent work by David Peterson on Concordion itself, and particularly the facility to write extensions 😉
I hope you enjoy using it as much as I enjoyed creating it...
My Stories Are Bigger Than Your Story
I've always found it a challenge when new teams are adopting scrum but have simply renamed their list of requirements as a product backlog. Scrum provides a nice facade which shows steady progress churning through these requirements, but it makes it extremely difficult to measure the tangible business value. This is particularly the case where we have a scrumbut model, especially when "done" doesn't include production/release.
The progression from a list of requirements to user stories with acceptance criteria is usually the first step I recommend, but this is fraught with danger. Requirements typically have inherent dependencies that don't translate well into stories, and they are also usually solutions to problems rather than the problems themselves. It is only by uncovering the underlying problems that we can start forgetting about the "requirements" and start providing tangible business value by solving business problems.
The first stab at cutting user stories usually results in very large stories with very vague, subjective acceptance criteria, if indeed there are any acceptance criteria at all. As teams work on these large stories, the tasks they produce are also probably too big and vague, and simply sit forever in the in-progress column. This is usually due to blindly trusting and following the scrum process. At this stage I usually challenge teams to stop tracking tasks and instead focus on delivering the stories. This is extremely uncomfortable at first, since teams will be struggling to deliver big stories. However, it only takes a sprint or two before the team start focussing on the stories and feel relieved that they no longer get bogged down in three- or four-hour planning meetings producing extremely detailed task breakdowns. The tasks are still extremely important to capture and update, but this is more of a real-time activity, and no one gets penalised for adding new tasks or removing tasks that are no longer needed...
This hybrid scrum/lean model provides a much greater opportunity to start introducing new technical practices (e.g. test first, automated acceptance testing, etc.) since stories are broken down at the last responsible moment and some stories are natural candidates (comfortable technologies, clearly defined, etc.) for trying new things.
The next challenge I usually face is getting stories to a size that is acceptable to both the team and the product owner. Applying the INVEST model works quite well here, as does parking open questions by agreeing to raise future stories as required, limiting the scope of the story in question to something that is estimable. At this point stories can become quite small (which is great IMHO), with perhaps only one or two acceptance criteria. This for me is the sweet spot. It means the story will probably fit in the two-hour to two-day window, it is well understood, it is easy to estimate, it is easy to test, and a host of other great things... However, it will also probably invalidate any existing (but also highly dubious) velocity metrics, since the team will need to rebaseline...
I've witnessed scrum applied extremely well when supported with some additional technical practices and good story discovery, acceptance criteria and backlog management. More often than not in my experience, though, scrum is applied as smoke and mirrors over the requirements list to demonstrate progress, and it's only when you hit the last sprint that you realise you can't integrate/release/deliver, despite having entered it with 90% of the requirements completed and only a couple left to go (e.g. Oracle Auditing and Secure Messaging)...
Top Tips – User Centred Automated Acceptance Tests
Following on from my previous article, I thought I would share some top tips for creating automated acceptance tests for projects using user centred design. There is nothing inherent in user centred design which prevents an agile approach being adopted and indeed agile adds huge value to user centred design by reducing the feedback loops during the "design".
The outputs from the "design" part of user centred design are usually a set of wireframes, sometimes highly graphical, to be used for marketing and soliciting customer feedback. Low-fidelity wireframes, such as those produced with Balsamiq, are fantastic for exploring the solution space with minimal commitment; they retain that back-of-a-napkin look and feel, which prevents customers and the business misjudging how much work remains to make them real. At some point, developers will get their hands on these wireframes and be tasked with "making them real". Of course, if the developers can help create the wireframes, so much the better; what a great way to reduce that feedback loop 😉
So, we're a delivery team with the wireframes available, and we want to start carving them up into stories (ideally with a customer), executable tests and working code. Where to start...
First we want to express what we're seeing as stories so we can assign value to the stories and prioritise based on ROI. Depending on the technology choices, these wireframes also help to perform an initial design of the application in terms of data, services, components, etc.
The stories we write can be high-level stories capturing the user experience as users navigate through the site, or they can be as low-level as which fields are visible within a given area of the site.
My preferred approach is to write stories and tests at the high level and, as we uncover the detail, capture it as smaller stories and more detailed tests. This seems to fit naturally with user centred design: it allows planning and release activities to focus on the high-level stories around the user experience (which is probably why they chose user centred design in the first place), while the development and build activities focus on delivering the smaller stories.
My top tips would be...
- Write your stories and tests based on what you see in the wireframes
- e.g. When a user (who is not logged in) visits the homepage, they can see the "News" in the main area. Notice that the wireframe has a single article, but this could be many articles in the future. There is also no reason to assume this is the "Latest News", "Featured News", "Breaking News" or anything else.
- Be constantly aware of the context (explicit or not) within the wireframe when writing your test
- e.g. In the latest news section, we may see a "Read More" link and assume that we can simply find this "Read More" link on the page. Of course, in the future there may be more than one news article (so we want to find the "Read More" link related to a particular article), and indeed there may be more sections that contain "Read More" links, such as user comments. So when we implement a test around the news, we want to ensure we are finding the correct "Read More", which means looking for it in the news section and in relation to the article(s) of interest (see the first sketch after this list).
- Don't assume the wireframe is what the final implementation will be so think in abstract terms
- e.g. If there is a button in the wireframe, it may be a clickable image later, or a tab panel. So word your tests not around the button, but around the capability the button provides: instead of "...and the user can click <Help>" we might have "...and the user is able to obtain <Help>".
- Envisage alternative implementations while writing the tests and validate your test would still be valid
- e.g. If I change this from being a form to being editable inline, does the test still work?
- Try to keep your tests discrete; it is better to have lots of small, simple tests than a small number of tests that each test too many things.
- e.g.
- when... can they see today's weather
- when... can they see today's temperature
- when... can they see tomorrow's weather
- Use "Given, When, Then... and" to think about the scenarios you need, but try expressing them in natural english.
- e.g. When someone is viewing the school holidays calendar (Given) and they enter their postcode (When), the calendar should only display holidays for schools within the catchment radius (Then) and the user should be able to remove the filter (And).
- I like to think of "Thens" as being the assertions, and with UCD this is the stuff I want to see on the screen as a result of me doing something.
- The "Ands", which are an extension to Dan North's, I like to think of as the stuff I can now do in addition and in UCD this usually relates to new functionality which has been exposed or links that have appeared.
- Refer to the Concordion Techniques page for some great examples of how to write robust tests in natural language.
- Separate site aspects (navigation, login) from functional or contextual tests
- Refactor your tests as you go to evolve a testing DSL that simplifies writing future tests (see the first sketch after this list)
- Use personas (or metaphors) to encapsulate data
- Bring your personas to life
- Write them a CV, or post their key characteristics on the team wall.
- You can validate key aspects of your persona within your test (in the "Given" section) to provide better tests by clarifying which aspects of the persona you are actually relying on (see the second sketch after this list)
- e.g. "When Joe (who is on legal aid) views the list of local lawyers, the lawyers who provide legal aid are highlighted". Notice that highlighted could mean anything, from promoting them to the top, changing colour, or adding a nice icon...
- Don't create too many personas
- Personas are very powerful, but when abused they become difficult to manage. To test functional aspects related to simple data variations, use dynamic personas (e.g. when someone, who has not yet entered their postcode, accesses their profile...)
- Think like a customer
- Ask yourself how much money you would pay for a given feature or piece of functionality
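To illustrate a few of the tips above in one place (context-aware lookups, capability-based wording and an evolving testing DSL), here is a sketch assuming Selenium WebDriver; the element ids and class names ("news", "article") are invented for illustration.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// A small page-object-style DSL: every lookup is scoped to the news
// section, so a "Read More" link in the comments section can never be
// matched by mistake.
public class NewsSection {

    private final WebElement section;

    public NewsSection(WebDriver driver) {
        this.section = driver.findElement(By.id("news"));
    }

    public Article article(String headline) {
        for (WebElement candidate : section.findElements(By.className("article"))) {
            if (candidate.getText().contains(headline)) {
                return new Article(candidate);
            }
        }
        throw new AssertionError("No news article with headline: " + headline);
    }

    public static class Article {
        private final WebElement root;

        Article(WebElement root) {
            this.root = root;
        }

        // Worded as a capability ("read more about this article"), not
        // a widget ("click the Read More link"), so the test survives a
        // redesign from links to buttons or tabs.
        public void readMore() {
            root.findElement(By.linkText("Read More")).click();
        }
    }
}
```

A test then reads naturally, e.g. `new NewsSection(driver).article("School holidays").readMore();`, and only this class needs to change when the markup does.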
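And a second sketch (all names invented) showing a persona encapsulating test data, with the Given re-asserting the one aspect of the persona the scenario actually relies on:

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class LocalLawyersTest {

    // A persona bundles the characteristics posted on the team wall
    // into a single, named fixture.
    static class Persona {
        final String name;
        final boolean onLegalAid;
        Persona(String name, boolean onLegalAid) {
            this.name = name;
            this.onLegalAid = onLegalAid;
        }
    }

    static final Persona JOE = new Persona("Joe", true);

    @Test
    public void lawyersProvidingLegalAidAreHighlightedForJoe() {
        // Given: make explicit the aspect of the persona under test.
        assertTrue("This scenario only makes sense for a legal-aid persona",
                JOE.onLegalAid);

        // When Joe views the list of local lawyers...
        // Then the lawyers who provide legal aid are highlighted
        // (however "highlighted" ends up being implemented).
    }
}
```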
I'm sure that I'll refine these tips in the future, so watch this space...
Too Many Broths Spoil The Cook
I've got several things bubbling over at the moment, and because I never bothered capturing them as stories and placing them in a backlog, I'm really struggling to make headway with any of them. Some are personal experimentation, others are professional, and finally there are a few projects which may end up as open source.
So what's stopping me making this backlog? It's the lack of a single, coherent customer to do the prioritisation, of course. I could certainly pull out all the things I want to do into a set of stories, but would it be right for me to also play the customer role? I certainly don't want to ask my work to do the prioritisation, since I know which side of the fence they'll lean on...
I think my personal challenge is quite similar to many projects I have witnessed where there has not been a single customer. It becomes extremely difficult to prioritise stories and deliver a single, useful end-to-end feature, and instead we deliver lots of little bits and pieces. Of course, switching contexts is in itself extremely expensive in terms of productivity, but unless there is a single customer with clear goals and a solid backlog of well-expressed stories, switching contexts is inevitable.
What do I plan to do? Well, first and foremost I'm going to take the baby step of at least creating my backlog of stories... By definition these are my stories, so in reality I am the customer. I guess I'm just like all other customers: I'm not a very good one, and I like to keep changing my mind. Oh well...