The Value of Passion
My last engagement has left me a little scarred and bruised. It really tested my core agile values and, as I reflect on it, I have come to a surprising conclusion.
The engagement involved introducing and advocating cloud virtualisation to improve the testing capabilities within a global tier 1 bank. The bank had all the elements in place, and therefore the challenge was not remotely technical but 100% cultural.
The project was driven and under-pinned by a vision rather than a backlog; this was to prevent the project being de-railed by cultural dead-ends or technical side-shows. It wasn't so much "we'll know it when we see it"; rather, we knew what was acceptable, and everything that prevented this could and would be challenged. It was firmly based on the devops principles of completely automated, repeatable environments.
Going back to the values, I personally place high value on simplicity, feedback and working software, so rather than powerpoint the project to death we developed and released a working solution. This was not a theoretically working solution, but a working solution on real infrastructure provided by the men in black. Sadly, the real infrastructure we were using was on the wrong continent and was only half-heartedly supported by the men in black, preventing the transfer of data required for testing, and at this point the rails came off.
We had performed live demonstrations, and key people were heavily engaged and excited about making use of what we had developed. What should have been a simple lift and shift to the correct continent proved instead to be the slow and excruciatingly painful death of the vision. Everyone agreed with the vision, but culture, policies, processes and bureaucracy all conspired against us.
The first wheel to come off was our use of an unsupported operating system. It was the correct operating system, but it hadn't been built by the men in black so wasn't sufficiently opaque. It took a few months of unpicking and reverse engineering just to get back to some of the basic capabilities that are mandatory for deploying software à la devops in a highly restrictive financial organisation. Those months did, however, allow us to get back to where we were, and at this stage the feeling was that not only had we rebuilt it, but this time we had built it better, as it was more in line with wider strategies. So finally, we want to get people in there, but wait... We have no disk space 🙁
Popping into your local PC World for a few TBs of disk is easy and will set you back a couple of hundred bucks at most. In a corporate data-centre, however, disk space is like gold dust and is charged by the ounce. This was the first stage in the project where we needed real funds and investment. We were at the point where we had a working solution, an eager customer base and genuine excitement. This was the game-changer and we were very, very excited...
What followed sums up the cultural challenges: instead of capitalising on the solution and looking for opportunities to deploy it to other groups, we spent months creating detailed business cases, investment plans, roadmaps, etc., to get modest sums to fund the final rollout of the solution. During these months I had to put my personal values aside in favour of documentation, process and all those other things that are less valued in agile, but I was playing the long game. Our strategy was realising our vision, and that meant enduring these little tactical battles where necessary. What I wasn't prepared for was how demoralising this would be and just how much of my passion would be destroyed in the process. This wasn't a case of everyone clubbing together to devise a brighter future; it was a horse-trading exercise of compromise and trade-offs.
As I look back now, detached from the project, it would be very easy to view this as a failure; we certainly failed to get the funding or deliver our vision. What we did achieve, though, was the planting of a seed. It will take several years for the seed to grow, just as agile typically takes a few years to embed itself in a large multi-national organisation (and even there the use of the word agile probably means nothing more than the team doing a stand-up each day). I'm hoping that when the time is right, people may be able to dust off a few of the blog articles I wrote explaining how devops can strengthen governance and auditing, or why creating an environment automatically in minutes is better than building one manually over weeks (even if the steps are all self-service).
The bank in question is a bank I personally love. The people are great, the technology (when you can use it) is cutting edge and the challenges are anything but trivial. I had the opportunity to stay at the bank and opted not to, which was a surprise even to myself. It wasn't because of the lack of feedback, the skepticism of simplicity, the illusion of control or the lack of trust. It was because I lost my passion. It turns out that my most important value is passion, and this is the one fire they failed to ignite and instead extinguished.
I have also realised (again) that every single assumption you make at the outset of a project needs to be made explicit and validated. I'm heartened that our project was not a big costly disaster, it was a (relatively) small, well contained experiment in the art of the (im)possible. We delivered working software, but failed to get it into the hands of the users. We found simplicity hiding in a web of complexity. We were open and honest with all our information and everything we did was made available to everyone.
My passion is always to get high quality, working software into the hands of the customer as quickly as possible and delight them. To drive my passion I rely on my own core values of simplicity, feedback, transparency and trust. I have no doubt the bank in question will deliver yet another highly compromised version of what we have already built and demonstrated twice; I can only hope our original vision remains as the yard-stick.
Top Tips – Advanced Acceptance Test Driven Development
Over the course of my career I've had the pleasure of working with some great agile teams. I've also had some bitter disappointments working with great developers, testers and BAs who just don't get it...
Many of the teams that get it didn't actually use natural language to create executable acceptance tests; however, they did have extensive suites of automated acceptance tests, usually written by the business analysts or developers, but in a language and style that is not normal for the non-agile developers I have encountered. So, in an attempt to capture the difference, I'm going to try to provide some useful tips and techniques to challenge those attempting to adopt acceptance test driven development within a corporate environment.
I will begin by recommending the various conference videos from GTAC. I'm not saying Google are doing it perfectly (I just don't know), but I am happy to believe they are probably doing lots of things right...
Most importantly, if we are going to go to the bother of creating executable acceptance tests, think carefully about who is accepting them. If the only person who will accept them (and I mean really accept, as in understand and even be happy to take ownership of them) is the developer, then use the most appropriate tool.
So the tips and techniques...
- Make sure the story is the right story... If you have a story that is purely technical, it's possibly better to test it using developer tests; it's unlikely to be something the business "really" care about... If the story isn't for a paying customer but for an internal user, try to find out what benefit that internal user provides for the customer and reword the story from the end user's perspective.
- Don't clean up after tests... What matters more for acceptance testing is ensuring you know the state of the environment at the beginning of the test and that the test can run based on that state. Leaving the data created by the test can help immensely when issues are found. The amount and complexity of changes an acceptance test can inflict on an environment, combined with the number of points at which an acceptance test can fail, make cleaning up extremely complex and error prone, and it does not provide the same level of ROI as it does for unit tests. This has the added benefit of building a business case for better, more flexible environments and continuous delivery...
- Create unique contexts for each test... To prevent tests stepping on each other's toes when they are run in parallel, create a unique context for each test. This could be as simple as creating a user with a unique id for that test, or might require creating a collection of unique items you plan to use (e.g. instruments in a trading app, pages in a CMS, articles for a blog).
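A minimal sketch of what a unique context can look like in Python (the helper and field names are illustrative, not from any particular framework):

```python
import uuid

def unique_context(test_name):
    """Build a per-test context so parallel runs never collide.

    The uuid fragment keeps ids unique even when the same test runs
    twice against one shared environment.
    """
    suffix = uuid.uuid4().hex[:8]
    return {
        "user": f"{test_name}-user-{suffix}",
        "instrument": f"{test_name}-instr-{suffix}",
    }

# Two runs of the same test get distinct users, so they cannot
# step on each other's data.
ctx_a = unique_context("settle_trade")
ctx_b = unique_context("settle_trade")
assert ctx_a["user"] != ctx_b["user"]
```

Because every id embeds the test name, the data left behind (see the previous tip) also tells you exactly which test created it.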
- Don't wait for something, make it happen... Where you come across a situation where you need to wait for something, prod the system to make it happen, and use a spin loop so that the test still passes in an environment where you can't prod.
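A spin loop of this kind can be sketched as a small Python helper (the name and signature are assumptions for illustration):

```python
import time

def wait_until(condition, prod=None, timeout=10.0, interval=0.5):
    """Spin until `condition()` is true, prodding the system when we can.

    `prod` is optional: in environments where we cannot trigger the
    event ourselves, the loop simply polls until the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if prod is not None:
            prod()  # e.g. kick the batch job rather than wait for its schedule
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

A test might then call something like `wait_until(lambda: order.status == "FILLED", prod=run_matching_cycle)` locally, and pass `prod=None` against an environment it can't touch.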
- Question everything, even the test framework... As you develop your acceptance tests, the supporting test framework, and ultimately the application, continually ask yourself what would happen if you replaced x with y. For a web based application, you might ask: what would happen if we wanted to make this available on an Android device or iPhone, does my acceptance test still hold true? Can my test framework support this easily without visiting all the fixtures? What if I change the test framework I use?
- Use the English language to drive the domain model... Good acceptance tests usually make explicit the domain model needed to support the testing, and more often than not this drives the actual domain model needed within the application.
- Use the real application code if at all possible... Rather than completely decouple your tests from the implementation, use the real implementation at the appropriate layer. This adds the benefit that changes to the implementation require no changes to the tests. To achieve this requires a suitably layered test framework to prevent these implementation changes rippling too far up resulting in broken tests. The best candidates for reuse are typically the domain models, data access components and service interfaces.
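As a sketch of what this layering can look like (all class names here are hypothetical, not from any real project), the test framework wraps the real domain model behind one narrow driver so that implementation changes stay contained:

```python
# A minimal sketch of a layered test framework. Account stands in for
# the application's real domain model; the acceptance tests talk to it
# only through one narrow driver, so an implementation change touches
# the driver rather than every test.

class Account:  # hypothetical production domain model, reused as-is
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

class AccountDriver:  # the single test-framework layer over it
    def __init__(self, owner):
        self._account = Account(owner)  # reuse the real implementation

    def pay_in(self, amount):
        self._account.deposit(amount)

    def balance(self):
        return self._account.balance

# An acceptance test written only against the driver layer:
driver = AccountDriver("alice")
driver.pay_in(100)
driver.pay_in(50)
assert driver.balance() == 150
```

If `deposit` were renamed or moved behind a service interface, only `AccountDriver` would change; the tests above it would not.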
- Assume you are running everything on your own machine until you can't... Start with the assumption that everything you need is running on your local development machine, since the ultimate goal is that you can actually run these tests locally to check the functionality works. Once you have a test running and passing locally, you know the functionality is working and are then in a better place to refactor the test to support different environments.
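One way to sketch this local-first assumption in Python (the environment variable name here is an assumption, not any standard):

```python
import os

# Local-first: every endpoint defaults to the local machine, and a
# single environment variable (ATDD_TARGET_HOST, a name invented for
# this sketch) overrides the host when the same suite later runs
# against a shared, production-like environment.
def service_url(name, port):
    host = os.environ.get("ATDD_TARGET_HOST", "localhost")
    return f"http://{host}:{port}/{name}"

# Locally this resolves to http://localhost:8080/orders; on a shared
# rig, export ATDD_TARGET_HOST=test-rig and nothing else changes.
orders_endpoint = service_url("orders", 8080)
```

The test code itself never mentions an environment; only the configuration layer does, which is exactly the refactoring the tip defers until the test passes locally.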
- Keep the tests isolated... Don't try to optimise tests by adding additional verifications or steps to existing stories. Create new tests instead. This might expose problems with running the tests quickly, but explore other solutions to that rather than creating huge tests that test too much. And imagine how the business will react when you say you are running 500 business tests with a 100% pass rate but can't test their new features because you don't have enough kit...
- Don't write the test at all... If the story doesn't have much value, or the systems you are using are not in your control and are not test friendly, then stop just short of automating it... Work out how you might automate it; the exercise will highlight the blockers and drive a better story and clearer acceptance criteria. But weigh up the cost of writing, maintaining and executing the test against the value of the story and the true cost and likelihood of a defect occurring in that story...
I'm sure a few of these will feel a little controversial or sit uncomfortably depending on your experience. I'm also sure some appear on the face of it to conflict with others. For those who reach nirvana, you will end up with a suite of extremely robust acceptance tests (owned and fully understood by the business), which a developer can run locally before committing code and which are then run again in a virtual, production-like cloud.
Enterprise Agile – Evolutionary Standards
At the risk of being lambasted by the agile community I will use the words enterprise and agile in the same sentence 😉
This article largely follows on from some previous entries and in particular my entry on user centred test driven development.
It is a frequent complaint that large organisations trundle along painfully and slowly. Work can't start without following some process or other until you have sign-off. Part of this sign-off will probably involve agreement to follow certain standards and guidelines, but if these standards don't yet exist, how can we start?
To challenge this and present an alternative approach, why not make the "standards" part of the delivery itself? Make it clear up front that rather than wait for the standards to be released (which would be the normal mode of attack in large organisations), you will actively work with whichever standards body exists in the organisation to evolve just enough standards to support the actual work you are doing as you work through the backlog.
To make this work, COURAGE is imperative... Someone has to have the courage to put a stake in the ground early, recognising there is a small risk this may change. Developers should embed the standards into their automated testing as early as possible; this means that when and if a standard does change, there are tests in place which will assist developers in ensuring that all work to date is easily brought up to date...
The result of this is a design language that everyone can understand: when someone says they are writing a test which looks for the jobs tag in the currently featured news article, everyone should know what that refers to in the wireframes, and also know how it will be identified and marked up in the implementation. This allows tests to be written before any code, and even for the final "Look And Feel" to progress alongside development.
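As an illustrative sketch (the class name and markup here are hypothetical), the agreed identifier from the standard can live in one shared constant, used both by the code that renders the page and by the acceptance test that looks for it:

```python
# The "design language" identifier everyone agreed on. If the standards
# body later changes it, one constant changes and the tests flag every
# piece of work that needs bringing up to date.
FEATURED_ARTICLE_TAG_CLASS = "featured-article-tag"

def render_featured_article(title, tags):
    """Stand-in for the real template: marks up tags with the agreed class."""
    spans = "".join(
        f'<span class="{FEATURED_ARTICLE_TAG_CLASS}">{t}</span>' for t in tags
    )
    return f"<article><h1>{title}</h1>{spans}</article>"

def test_featured_article_carries_jobs_tag():
    html = render_featured_article("Bank hiring spree", ["jobs", "finance"])
    # The test speaks the same language as the wireframe: look for the
    # jobs tag in the currently featured news article.
    assert f'class="{FEATURED_ARTICLE_TAG_CLASS}">jobs<' in html

test_featured_article_carries_jobs_tag()
```

The test was written against the standard, not against any finished page, so the "Look And Feel" can evolve underneath it without breaking it.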
Of course, you're always free to continue in the traditional model and wait three months until the standards body within the organisation produces a 300 page guidelines document before even starting that killer new feature that will storm the market... Or make totally random guesses, which are much more likely to be wrong, and be safe in the knowledge you have the traditional saviour of projects - Hope and Prayer!!!