I've used virtualisation technologies for several years now, and recently, in the context of continuous delivery and automated acceptance testing, I've really seen the benefits come to life. The latest buzz is obviously the cloud, and I had a few hours spare last Friday, so I thought I'd test my knowledge.
I looked around for official certification and was dismayed at the high cost (in both time and money) of attending a mandatory course and sitting the exams to become VMware certified. I was also disappointed at the lack of vendor neutrality in many of the certification paths. However, I was lucky enough to stumble on the Rackspace CloudU certification, and a few hours later I have my certificate.
The content was perhaps a little light and high level, covering topics such as:
- the key cloud layering models - Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS)
- the key cloud deployment/operating models - Private Cloud, Public Cloud, Hybrid Cloud
However, the certification was vendor neutral and also highlighted some of the key business benefits and cost savings that moving to the cloud can produce. I actually enjoyed the content and, despite not placing much personal value on certification (Certified Scrum Master, anyone?), I did find this a worthwhile exercise.
I will however be exploring more technical cloud certification paths and EMC are currently top of my radar...
Over the course of my career I've had the pleasure of working with some great agile teams. I've also had some bitter disappointments working with great developers, testers and BAs who just don't get it...
Many of the teams that get it didn't actually use natural language to create executable acceptance tests; they did, however, have extensive suites of automated acceptance tests, usually written by the business analysts or developers, but in a language and style far removed from that of the non-agile developers I have encountered. So, in an attempt to capture the difference, I'm going to provide some useful tips and techniques to challenge those attempting to adopt acceptance test driven development within a corporate environment.
I will begin by recommending the various conference videos from GTAC. I'm not saying Google are doing it perfectly (I just don't know), but I am happy to believe they are probably doing lots of things right...
And most importantly, if we are going to go to the bother of creating executable acceptance tests, think carefully about who is accepting them. If the only person who will accept them (and I mean really accept, as in understand and even be happy to take ownership of them) is the developer, then use the most appropriate tool.
So the tips and techniques...
- Make sure the story is the right story... If a story is purely technical, it's possibly better to test it using developer tests; it's unlikely to be something the business "really" cares about... If the story isn't for a paying customer but for an internal user, try to find out what benefit that internal user provides to the customer and reword the story from the end user's perspective.
- Don't clean up after tests... What matters more in acceptance testing is knowing the state of the environment at the beginning of the test and ensuring the test can run from that state. Leaving behind the data a test creates can help immensely when issues are found. The amount and complexity of changes an acceptance test can inflict on an environment, combined with the number of points at which it can fail, makes cleaning up extremely complex and error prone, with nothing like the ROI it has for unit tests. This also has the added benefit of building a business case for better, more flexible environments and continuous delivery...
- Create unique contexts for each test... To prevent tests stepping on each other's toes when run in parallel, create a unique context for each test. This could be as simple as creating a user with a unique id for that test, or it might require creating a collection of unique items you plan to use (e.g. instruments in a trading app, pages in a CMS, articles for a blog). See the sketch after this list.
- Don't wait for something, make it happen... Where you come across a situation where you need to wait for something, prod the system to make it happen, and wrap the check in a spin loop (polling with a timeout) so that the test still passes in an environment where you can't prod. Again, see the sketch after this list.
- Question everything, even the test framework... As you develop your acceptance tests, the supporting test framework and ultimately the application, continually ask yourself what would happen if you replaced x with y. For a web based application, you might ask: if we wanted to make this available on an Android device or iPhone, does my acceptance test still hold true? Can my test framework support this easily without visiting all the fixtures? What if I change the test framework I use?
- Use the English language to drive the domain model... Good acceptance tests usually make explicit the domain model needed to support the testing, and more often than not this drives the actual domain model needed within the application.
- Use the real application code if at all possible... Rather than completely decoupling your tests from the implementation, use the real implementation at the appropriate layer. This has the benefit that changes to the implementation require no changes to the tests. Achieving it requires a suitably layered test framework to prevent implementation changes rippling too far up and breaking tests. The best candidates for reuse are typically the domain models, data access components and service interfaces.
- Assume you are running everything on your own machine until you can't... Start with the assumption that everything you need is running on your local development machine, since ultimately the goal is that you can actually run these tests locally to check the functionality works. Once you have a test running and passing locally, you know the functionality is working and are then in a better place to refactor the test to support different environments.
- Keep the tests isolated... Don't try to optimise tests by adding additional verifications or steps to existing stories; create new tests. This might expose problems with running the tests quickly, but explore other solutions to that rather than creating huge tests that test too much. And imagine how the business will react when you say you are running 500 business tests with a 100% pass rate but can't test their new features because you don't have enough kit...
- Don't write the test at all... If the story doesn't have much value, or the systems you are using are not in your control and are not test friendly, then stop just short of automating it... Work out how you might automate it; the exercise will highlight the blockers and drive a better story and clearer acceptance criteria. But weigh up the cost of writing, maintaining and executing the test against the value of the story and the true cost/likelihood of a defect in that story...
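To make the unique-context and spin-loop tips a little more concrete, here is a minimal JUnit sketch. TradingApp, createUser, placeOrder, triggerMatchingRun and orderStatus are hypothetical stand-ins for whatever your application exposes, and the timeout and polling interval are arbitrary assumptions.

```java
import static org.junit.Assert.assertEquals;

import java.util.UUID;

import org.junit.Test;

public class PlaceOrderAcceptanceTest {

    // TradingApp is a hypothetical driver for the application under test.
    private final TradingApp app = TradingApp.connect();

    @Test
    public void placedOrderIsEventuallyFilled() throws Exception {
        // Unique context: each run creates its own user, so parallel runs
        // (and data left behind by earlier runs) cannot collide.
        String userId = "test-user-" + UUID.randomUUID();
        app.createUser(userId);

        String orderId = app.placeOrder(userId, "VOD.L", 100);

        // Don't wait, make it happen: prod the system where we can...
        app.triggerMatchingRun();

        // ...and spin with a timeout so the test still passes in an
        // environment where prodding isn't possible.
        long deadline = System.currentTimeMillis() + 30000;
        String status = app.orderStatus(orderId);
        while (!"FILLED".equals(status)
                && System.currentTimeMillis() < deadline) {
            Thread.sleep(500); // arbitrary polling interval
            status = app.orderStatus(orderId);
        }
        assertEquals("FILLED", status);

        // Deliberately no clean-up: the user and order are left behind
        // to help diagnose any failures.
    }
}
```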
I'm sure a few of these will feel a little controversial or sit uncomfortably, depending on your experience. I'm also sure some appear, on the face of it, to conflict with others. For those who reach nirvana, you will end up with a suite of extremely robust acceptance tests (owned, and fully understood, by the business), which a developer can run locally before committing code and which are then run again in a virtual, production-like cloud.
I'm about to write a few articles covering some advanced acceptance testing techniques. I don't plan to get into the nitty gritty technical details; instead I want to discuss the whys... For some great material around acceptance testing I highly recommend looking at the Concordion techniques page, and I can't speak highly enough of Gojko Adzic; I recommend you look at his blog and in particular the comments on his posts.
The question I want to ask is slightly more philosophical. Why are we really writing automated acceptance tests and who are they really for?
In an acceptance test driven environment, the acceptance tests help ensure you have solved the right problem, and developer tests help ensure you are solving the problem the right way. To validate that we are solving the right problem, we need to express the tests in a way which doesn't tie us to a particular implementation, so we probably want to drive them from the user experience: the functionality we expose to the user and what the user can expect when using that functionality. So we are writing acceptance tests that check that the functionality we are making available to our customers is working correctly, without worrying about how we will provide that functionality. But does that mean we are expecting our customer to "accept" those acceptance tests?
In agile teams you probably have a product owner, and in an ideal world we would want the product owner to "own" these acceptance tests. More often than not, the product owner will happily own the story but will delegate owning the specifics (which sadly often includes testing) to a business analyst. Our goal is to get the product owner to own these tests, but with a business analyst in the way we are probably already at the stage where any tests will be implementation specific, since the business analyst is probably doing exactly that: working out how to solve the problem... In fact, business analysts probably don't want to own the tests either, which leaves the team...
Let's reflect for a moment... We want the customer or product owner to own acceptance tests, but it usually ends up being the team that owns them, so let's explore what typically happens... The team searches the web for acceptance testing techniques, comes across BDD, and sees there is a wealth of tools out there supporting it. They pick up a tool (Cucumber, JBehave, etc.) and all tests are now captured and represented in pseudo-English, in the hope that the product owner or business analyst can start creating these tests themselves. I've yet to meet a product owner or business analyst (or indeed a developer) who uses this style of language:
A product owner walks into a bar and says to the barman:
"Given I am thirsty
and I am standing at a bar
and there is a beer mat in front of me,
When I ask the barman for a pint of his best bitter
Then the barman will pour a pint of his best bitter
and place it on the beer mat in front of me".
Just a little bit verbose (not to mention slightly implementation specific) for ordering a pint of best bitter. So my point is that BDD is a technique: it is invaluable for exploring the problem domain and capturing scenarios and examples to help understand a problem, but those scenarios are not a specification in and of themselves. Using a tool too early to automate them ties you into this form of unnatural expression and eliminates a choice of how to engage with the customer later.
As a team, use the technique in discussions, but then use a tool or framework (e.g. xUnit) more suited to the real owners of the executable part (the developers). This leaves the choice of customer facing tool to a more appropriate moment, when the customer actually does want to engage, while benefiting from an environment and language the developers are most comfortable with... I've written previously that even working out what you plan to test or demonstrate before working out how to implement it can add immense value as a thought exercise.
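As an illustration, the bar scenario above might be captured as a plain JUnit test, keeping the Given/When/Then thinking in the structure and comments rather than in a natural-language layer. Bar, Customer and Pint here are hypothetical domain classes invented for the sketch, not from any real framework.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderAPintTest {

    @Test
    public void barmanPoursBestBitterAndPlacesItOnTheBeerMat() {
        // Given a thirsty product owner standing at the bar
        Bar bar = new Bar();
        Customer productOwner = bar.admit(new Customer("product owner"));

        // When they ask the barman for a pint of his best bitter
        Pint pint = bar.barman().pourFor(productOwner, "best bitter");

        // Then the pint ends up on the beer mat in front of them
        assertEquals("best bitter", pint.contents());
        assertEquals(productOwner.beerMat(), pint.placedOn());
    }
}
```

The scenario reads almost as clearly, stays in a language the developers own, and leaves the door open to layering a customer facing tool over the same driver code later.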
There is also another scenario, which is by far the most dangerous... Having browsed the web, we want a cross functional team, so we embed a tester into the team to perform this function. The tester works closely with the business analyst and creates/owns functional tests. Most testers are new to development and don't have the skills or experience of the developers when it comes to writing code, and worse, we are trusting these inexperienced developers with writing the most important code in the system: the tests that validate that what we are doing is correct... Inevitably we end up with an enormous suite of functional tests that are very "script" based, not easy to understand, and which add little if any value to the day to day activities of the team.
So to recap: we want to write acceptance tests to validate that we are building the right thing (and that once built, or reimplemented, it continues to work), and we want the customer (or product owner) to "own" them. If either of these is not true in your organisation, then seriously ask yourself why you are doing what you are doing; put the customer's hat on and ask whether you (as a customer with limited technical knowledge) would ever "accept" what is being done on your "behalf"...
I thought I'd have a little blast at poetry for fun...
Agile is not a Gift I can Give,
Nor is it a Method I can Teach,
It is a Choice You must willingly Take,
And a Journey You are willing to Make.
The road Never ends,
It Twists and it Turns,
But the Road is your Road,
And it's your Trail which Blazes.
Don't be a Passenger,
Don't pay a Chauffeur,
Grab hold of the wheel,
And Pick your own Pace.
Take those Detours,
Enjoy the Delights,
Splash in the Fountains,
Chase those Green Lights.
My current contract ends in a few more days, so I'm taking the opportunity to dust off my worn copy of Rework by 37signals. I also have to make a long overdue thank-you to Craig Davidson, an outstanding agile developer I encountered in a previous engagement.
It's not a traditional agile book by any means, but the ideas presented in it resonate strongly with my agile values, and I find it has helped me immensely to keep grounding myself between contracts. I am now constantly surprised by just how many paper-cuts I have personally accepted at each engagement, and equally surprised at my own level of intolerance now. I'm actually thinking of requesting a discount from the authors, since I now give this book as a gift almost routinely...
I challenge anyone not to find the book invaluable in challenging their own current view of the world.
So, once more, and I must apologise profusely for the tardiness, thank you so much Craig...
I've written in the past about Using English to write Acceptance Tests, and the tool I choose/advocate is without doubt Concordion. Customers love seeing their stories come alive, but I've found developers can sometimes struggle to differentiate these tests from JUnit tests, particularly since JUnit provides the execution mechanism.
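For anyone who hasn't used Concordion, the shape is roughly this: a plain-English HTML specification instrumented with Concordion attributes, paired with a Java fixture that the Concordion JUnit runner binds to it. The greeting example below is a minimal sketch of my own, not taken from any particular project; by convention the spec sits alongside the fixture (HelloWorld.html next to HelloWorldTest.java).

```html
<!-- HelloWorld.html: the specification the customer reads -->
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
    <p>
        The greeting for user
        <span concordion:set="#firstName">Bob</span>
        will be:
        <span concordion:assertEquals="greetingFor(#firstName)">Hello Bob!</span>
    </p>
</body>
</html>
```

```java
// HelloWorldTest.java: the fixture the developer writes
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class)
public class HelloWorldTest {

    // Called by the instrumented specification above; in a real test
    // this would delegate to the application under test.
    public String greetingFor(String firstName) {
        return "Hello " + firstName + "!";
    }
}
```

Running the fixture under JUnit executes the spec and produces an HTML report with the assertion highlighted green or red, which is what the customer actually sees.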
I've also found that in many of the situations/organisations where I have introduced Concordion, a single story has required several tests. Although the Concordion guide presents some excellent techniques to deal with this, teams new to writing acceptance tests will be uncomfortable capturing stories in this format, and customers might not be confident that all their acceptance criteria are being met. I am therefore pleased to be releasing Concordion+ into the wild. At the moment it is a simple extension to Concordion which allows developers to create individual scenarios (or test cases) within a single story, and to ignore the scenarios they are actively working on. In addition, a new JUnit runner captures the state of each of these scenarios independently, reports them in the common IDEs, and allows developers to use the JUnit @Before and @After annotations. This should simplify adoption by developers, since they now have a JUnit lifecycle they understand.
I have to send a huge thank you to Nigel Charman for the concordion-extensions project, which helped immensely with my development. And of course I can't dare not mention the excellent work by David Peterson on Concordion itself, particularly the facility to write extensions.
I hope you enjoy using it as much as I enjoyed creating it...
I've always found it a challenge when new teams adopt scrum but have simply renamed their list of requirements as a product backlog. Scrum provides a nice facade showing steady progress churning through these requirements, but it makes it extremely difficult to measure tangible business value. This is particularly the case where we have a scrumbut model, especially when "done" doesn't include production/release.
The progression from a list of requirements to user stories with acceptance criteria is usually the first step I recommend, but this is fraught with danger. Requirements typically have inherent dependencies that don't translate well into stories and requirements are also usually solutions to problems rather than the problems themselves. It is only by uncovering the underlying problems that we can start forgetting about the "requirements" and start providing tangible business value by solving business problems.
The first stab at cutting user stories usually results in very large stories with very vague, subjective acceptance criteria, if indeed there are any acceptance criteria at all. As teams work on these large stories, the tasks they produce are also probably too big and vague, and simply sit forever in the "in progress" column. This is usually due to blindly trusting and following the scrum process. At this stage I usually challenge teams to stop tracking tasks and instead focus on delivering the stories. This is extremely uncomfortable at first, since teams will be struggling to deliver big stories. However, it only takes a sprint or two before the team start focussing on the stories and feel relieved that they don't get bogged down in 3 or 4 hour planning meetings producing extremely detailed task breakdowns. The tasks are still extremely important to capture and update, but this becomes more of a real-time activity, and no one gets penalised for adding new tasks or removing tasks that are no longer needed...
This hybrid scrum/lean model provides much greater opportunity to start introducing new technical practices (e.g. test first, automated acceptance testing, etc.) since stories are broken down at the last responsible moment and some stories are natural candidates (comfortable technologies, clearly defined, etc.) for trying new things.
The next challenge I usually face is getting stories to a size that is acceptable to the team and the PO. Applying the INVEST model works quite well here, as does parking open questions by agreeing to raise future stories as required, limiting the scope of the story in question to something that can be estimated. At this point stories can become quite small (which is great IMHO), with perhaps only 1 or 2 acceptance criteria. This for me is the sweet spot. It means the story will probably fit in the 2 hour to 2 day window, it is well understood, it is easy to estimate, it is easy to test, and a host of other great things... However, it will also probably invalidate any existing (and already highly dubious) velocity metrics, since the team will need to rebaseline...
I've witnessed scrum applied extremely well when supported with some additional technical practices and good story discovery, acceptance criteria and backlog management. More often than not in my experience, though, scrum is applied as smoke and mirrors over the requirements list to demonstrate progress, and it's only when you hit the last sprint that you realise you can't integrate/release/deliver, despite having entered that sprint with 90% of the requirements complete and only a couple to go (e.g. Oracle Auditing and Secure Messaging)...
My wife's friend has a Kindle, and the other night my wife asked whether I would be able to transfer one of the books she had just finished reading from her reader onto her friend's Kindle. I wasn't entirely sure I could, since my first thought was that DRM would probably prevent it. My second thought was: why? In the non-digital world, once my wife has finished a book there is nothing preventing her giving it to her friend, but in the digital world this isn't easy. Why not? Surely it should be possible for her to transfer her rights to someone else... Of course, someone will probably point out that this is already possible, but it was my next thought that really got me.
I wouldn't say we were minimalist; however, we do tend to have a 'clean out' regularly, which usually involves a trip to a local charity shop. We religiously donate the books we have read, the toys we have replaced and various other things. But now that my wife is using a reader, there are no books to donate.
My solution: www.digitalcharityshop.org. It doesn't exist yet, but I've bought the domain and intend to find like-minded individuals to help me set it up to receive unwanted digital content and resell it, with all profits going to charity. If you'd like to help me on this journey I'll be extremely grateful.
Of course, this won't just include ebooks; the solution is begging to include any digital content - movies, mp3s, software, apps... And just to be entirely clear, I'm not interested in "£1 of each purchase goes to charity" type deals; it all goes to charity...
So can you help?
I've just read a fantastic debate on the usability of Git (or rather the lack of it), and it reminds me that most technical folks I encounter are too clever for their own good and will constantly introduce complexity for the sake of it (or to keep their souls clear of the devil - users).
The problem is that these wannabe rocket scientists are fundamentally too stupid to appreciate that real people may not care about elegant layers of abstraction or powerful combinations of wacky commands that can yield amazing powers. What they want is a simple solution to the problem they have expressed. The technical folks I admire are those who can create a simple, usable solution in a format that is acceptable and digestible by the end user.
As Einstein put it, "Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage - to move in the opposite direction." Which suggests the technical community is filled with lots of intelligent fools...
And just to illustrate this further, I would include Spring in the mix here as an overly complex solution (actually a collection of them) to a fundamentally simple problem space, which, given the size of the user community, indicates there are a lot of intelligent sheep...