
22 Apr 2010

Enterprise Agile – Evolutionary Standards

Low Tech Evolutionary Standards

At the risk of being lambasted by the agile community, I will use the words "enterprise" and "agile" in the same sentence 😉

This article largely follows on from some previous entries, in particular my entry on user-centred test-driven development.

A common complaint is that large organisations trundle along painfully slowly.  Work can't start until some process or other has been followed and sign-off obtained.  Part of this sign-off will probably involve agreeing to follow certain standards and guidelines, but if those standards don't yet exist, how can we start?

To challenge this and offer an alternative approach, why not make the "standards" part of the delivery itself?  Make it clear up front that rather than waiting for the standards to be released (which would be the normal mode of attack in large organisations), you will actively work with whatever standards body exists in the organisation to evolve just enough standards to support the actual work you are doing as you work through the backlog.

To make this work, COURAGE is imperative.  Someone has to have the courage to put a stake in the ground early, recognising there is a small risk it may change.  Developers should embed the standards into their automated testing as early as possible; if and when a standard does change, there are then tests in place that will help developers bring all work to date back into line.

The result is a design language that everyone can understand.  When someone says they are writing a test that looks for the "jobs" tag in the currently featured news article, everyone should know what that refers to in the wireframes, and also how it will be identified and marked up in the implementation.  This allows tests to be written before any code, and even lets the final "look and feel" progress alongside development.
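As a minimal sketch of how such a standard might be baked into a test (the featured-article and tag class names are invented here for illustration, using JUnit and jsoup):

```java
import static org.junit.Assert.assertEquals;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.junit.Test;

public class FeaturedArticleStandardsTest {

    // Hypothetical markup; a real test would fetch the rendered homepage.
    private static final String HOMEPAGE_HTML =
          "<div class='featured-article'>"
        + "  <h2>New vacancies announced</h2>"
        + "  <span class='tag'>jobs</span>"
        + "</div>";

    @Test
    public void featuredArticleIsTaggedJobs() {
        Document page = Jsoup.parse(HOMEPAGE_HTML);
        // The selector *is* the embedded standard: if the standards body
        // later renames a class, this test fails and pinpoints the rework.
        String tag = page.select(".featured-article .tag").text();
        assertEquals("jobs", tag);
    }
}
```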

Of course, you're always free to continue in the traditional mode and wait three months for the standards body within the organisation to produce a 300-page guidelines document before even starting that killer new feature that will storm the market...  Or make totally random guesses, which are far more likely to be wrong, safe in the knowledge that you have the traditional saviour of projects - Hope and Prayer!

20 Apr 2010

Keeping It Simple – Regression vs Acceptance Testing

Another Emergn coach asked me the other day how I distinguish between an acceptance test and a regression test.  For me there is a very simple rule...

  • If I write the test before I write any code, it's an acceptance test.
  • If I write the test after I've written the code, it's a regression test.
  • If I write code to make an acceptance test pass, it is now a regression test.

Keeping it as simple as this keeps you out of trouble.  I've seen so many people try to retro-fit acceptance tests after they've written code, only to produce a test based on what they've written rather than what they should have written.  It's a subtle but important point: writing a test for what you've already built (which might be wrong, since you had no acceptance test) means you are potentially writing a test that asserts the system always does the wrong thing...
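A contrived sketch of that trap (the discount rule and numbers are invented for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Imagine the requirement was "members get a 15% discount", but the
    // code below was written without an acceptance test and got it wrong.
    static double memberPrice(double price) {
        return price * 0.90; // bug: applies 10%, not the required 15%
    }

    @Test
    public void acceptanceTestWrittenFirstCatchesTheBug() {
        // Written from the requirement, before the code: fails, as it should.
        assertEquals(85.0, memberPrice(100.0), 0.001);
    }

    @Test
    public void retroFittedTestLocksInTheBug() {
        // Written by inspecting the code afterwards: passes, and now
        // "proves" that the system always does the wrong thing.
        assertEquals(90.0, memberPrice(100.0), 0.001);
    }
}
```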

20 Apr 2010

Top Tips – User Centred Automated Acceptance Tests

Following on from my previous article, I thought I would share some top tips for creating automated acceptance tests on projects using user-centred design.  There is nothing inherent in user-centred design that prevents an agile approach being adopted; indeed, agile adds huge value to user-centred design by shortening the feedback loops during the "design".

The output from the "design" part of user-centred design is usually a set of wireframes, sometimes highly graphical, to be used for marketing and soliciting customer feedback.  Low-fidelity wireframes, such as those produced by Balsamiq, are fantastic for exploring the solution space with minimal commitment; they retain that back-of-a-napkin look and feel, which stops customers and the business misjudging how much work remains to make them real.  At some point, developers will get their hands on these wireframes and be tasked with "making them real".  Of course, if the developers can help create the wireframes, so much the better - what a great way to shorten that feedback loop 😉

An example wireframe...

So, we're a delivery team with the wireframes in hand, and we want to start carving the work up into stories (ideally with a customer), executable tests and working code.  Where to start?

First we want to express what we're seeing as stories, so we can assign value to them and prioritise based on ROI.  Depending on the technology choices, these wireframes also inform an initial design of the application in terms of data, services, components, etc.

The stories we write can be high-level stories capturing the user experience as they navigate through the site, or as low-level as which fields are visible within a given area of the site.

My preferred approach is to write stories and tests at the high level and, as we uncover the detail, capture it as smaller stories and more detailed tests.  This seems to fit naturally with user-centred design: planning and release activities can focus on the high-level stories around the user experience (which is probably why they chose user-centred design in the first place), while development and build activities focus on delivering the smaller stories.

My top tips would be...

  • Write your stories and tests based on what you see in the wireframes
    • e.g. When a user (who is not logged in) visits the homepage, they can see the "News" in the main area.  Notice that the wireframe has a single article, but there could be many articles in the future.  There is also no reason to assume this is the "Latest News", "Featured News", "Breaking News" or anything else.
  • Be constantly aware of the context (explicit or not) within the wireframe when writing your test
    • e.g. In the latest news section, we may see a "Read More" link and assume that we can simply find this "Read More" link on the page.  Of course, in the future there may be more than one news article (so we want to find the "Read More" link related to a particular article), and indeed there may be more sections that contain "Read More" links, such as user comments (so when we implement a test around the news, we want to ensure we are finding the correct "Read More", which means looking for it in the news section and in relation to the article(s) of interest).
  • Don't assume the wireframe is what the final implementation will be so think in abstract terms
    • e.g. if there is a button in the wireframe, this may later be a clickable image, or a tab panel.  So word your tests not around the button, but around the capability the button provides: instead of "...and the user can click <Help>" we might have "...and the user is able to obtain <Help>".
  • Envisage alternative implementations while writing the tests and validate your test would still be valid
    • e.g. If I change this from being a form to being editable inline, does the test still work?
  • Try to be discrete in your tests; it is better to have lots of small, simple tests than a small number of tests that each test too many things.
    • e.g.
      • when... can they see today's weather
      • when... can they see today's temperature
      • when... can they see tomorrow's weather
  • Use "Given, When, Then... and" to think about the scenarios you need, but try expressing them in natural english.
    • e.g.  When someone is viewing the school holidays calendar (Given) and they enter their postcode (When), the calendar should only display holidays for schools within the catchment radius (Then) and the user should be able to remove the filter (And).
    • I like to think of "Thens" as being the assertions, and with UCD this is the stuff I want to see on the screen as a result of me doing something.
    • The "Ands", which are an extension to Dan North's, I like to think of as the stuff I can now do in addition and in UCD this usually relates to new functionality which has been exposed or links that have appeared.
  • Refer to the Concordion Techniques page for some great examples of how to write robust tests in natural language.
  • Separate site aspects (navigation, login) from functional or contextual tests
  • Refactor your tests as you go to evolve a testing DSL that simplifies writing future tests (a minimal sketch of such a DSL follows this list)
  • Use personas (or metaphors) to encapsulate data
    • Bring your personas to life
      • Write them a CV, or post their key characteristics on the team wall.
    • You can validate key aspects of your persona within your test (in the "Given" section) to provide better tests by clarifying what aspects of the persona you are actually testing
      • e.g. "When Joe (who is on legal aid) views the list of local lawyers, the lawyers who provide legal aid are highlighted".  Notice that highlighted could mean anything, from promoting them to the top, changing colour, or adding a nice icon...
  • Don't create too many personas
    • Personas are very powerful, but when abused they become difficult to manage.  To test functional aspects related to simple data variations, use dynamic personas (e.g. when someone, who has not yet entered their postcode, accesses their profile...)
  • Think like a customer
    • Ask yourself how much money you would pay for a given feature or piece of functionality
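To make a few of these tips concrete, here is a minimal sketch of a persona-aware test and the beginnings of a testing DSL.  Every class and method name here (Persona, SiteDsl, the legal-aid wiring) is invented for illustration; the point is the shape, not the API:

```java
import org.junit.Assert;
import org.junit.Test;

public class LocalLawyersTest {

    // A persona encapsulates test data and brings it to life with a name.
    static class Persona {
        final String name;
        final boolean onLegalAid;

        Persona(String name, boolean onLegalAid) {
            this.name = name;
            this.onLegalAid = onLegalAid;
        }
    }

    private final Persona joe = new Persona("Joe", true);

    @Test
    public void lawyersProvidingLegalAidAreHighlightedForJoe() {
        // Given: assert the aspect of the persona the test relies on,
        // so the test documents *why* Joe should see highlighting.
        Assert.assertTrue(joe.onLegalAid);

        // When: worded as a capability ("views the list"), not a widget,
        // so the wireframe is free to change.
        LawyerListPage page = site().viewLocalLawyersAs(joe);

        // Then: "highlighted" deliberately leaves the visual treatment open.
        Assert.assertTrue(page.lawyersProvidingLegalAidAreHighlighted());
    }

    // --- the DSL below would be refactored out as it grows ---

    private SiteDsl site() {
        return new SiteDsl();
    }

    static class SiteDsl {
        LawyerListPage viewLocalLawyersAs(Persona who) {
            // A real DSL would drive the application here (e.g. via
            // Selenium); stubbed out in this sketch.
            return new LawyerListPage();
        }
    }

    static class LawyerListPage {
        boolean lawyersProvidingLegalAidAreHighlighted() {
            return true; // stub: query the agreed markup on the real page
        }
    }
}
```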

I'm sure I'll refine this example in the future, so watch this space...

19 Apr 2010

“Natural Language” Automated Acceptance Testing

Do you speak FIT?

I read with great interest James Shore's blog about FIT, but was dismayed that he devalues automated acceptance testing.  To claim that FIT is a "natural language" is wrong; it is a developer language, and this is possibly why customers don't get involved.  Concordion, on the other hand, is natural language and I think plays much better in this arena.  It is also much more developer-friendly.

I've written previously that for me the value of test-first is the thought processes surrounding it; however, where applicable, converting these into automated tests - and in particular automated acceptance tests - is a huge win.  I would love to have a customer "own" the tests, but when this isn't possible (almost always) I try to put my "customer" hat on, think like the customer, and express what I'm about to do in their language (which will be English, not FitNesse, Selenese or RSpec).  If the customer is happy with my specification, I can then use it directly as my test.
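As a rough sketch of how that specification becomes the executable test in Concordion (the spec is abbreviated, and the fixture and method names are mine, not from any real project):

```java
// The customer-facing specification is plain English in an HTML file,
// instrumented with Concordion attributes (abbreviated sketch):
//
//   <p xmlns:concordion="http://www.concordion.org/2007/concordion">
//     When a visitor who is
//     <span concordion:set="#status">not logged in</span>
//     views the homepage, the featured article is tagged
//     <b concordion:assertEquals="tagShownTo(#status)">jobs</b>.
//   </p>
//
// The fixture below wires that sentence to the system under test.
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class)
public class HomepageFixture {

    public String tagShownTo(String visitorStatus) {
        // A real fixture would drive the application and read the tag
        // from the rendered page; stubbed here for the sketch.
        return "jobs";
    }
}
```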

So for me the lack of a customer isn't the problem, but I agree with James on one point: there is a problem...

It's the people...  The majority of developers I've encountered can't think like the "customer" and instead thrive on complexity.  They can't express the problem or solution correctly, and they write tests that become implementation-specific.  This means they have written a test for one specific solution, where actually there could be a multitude of solutions, even in the same technology.  When they then implement that solution and the customer doesn't like it, the test can't be reused and needs to be reworked (I'm avoiding "refactored", since the test was actually wrong and should therefore be fixed, not refactored).  This is the problem: the test may be rewritten many times, at which point the customer will be thinking, "This is now the nth time I've asked for this exact same feature, and I've seen five different versions of a test for the same thing, none of which produce what I'm asking for."  If I were that customer, would I want to own these "tests", which seem so difficult to change and make it such a burden to tweak the implementation?

So for me, if I don't know what I'm doing, I won't do it; I'll ask for help from someone who does.  I would encourage all developers to have the courage to admit when they are out of their depth with a practice, and to seek advice rather than struggle on developing the wrong thing, which ultimately ends up having little value.

I forever find myself coming back to the five values, and when I measure FIT against simplicity, communication and feedback, it comes in at "Good, could do better"...