Acceptance Test Driven Development (ATDD)

18 April

It’s great when a team of people works together to produce something useful and effective; using ATDD is a customer-focused way of delivering what people want.

Elisabeth Hendrickson (who will be speaking about ATDD at Fusion and STANZ this year) describes ATDD as “a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins. It’s the best way I know to ensure that we all have the same shared understanding of what it is we’re actually building. It’s also the best way I know to ensure we have a shared definition of Done.” (read her blog post on the subject here).

Getting all the relevant people around a table at the beginning of a development project can be hugely beneficial. Instead of working in the style of Chinese whispers (person one describing what they want to person two, person two telling person three, person three building it, person four testing it and then sending it back to person one), where key messages and objectives can be lost along the way, a collaborative, inclusive approach means that every person working on a project sees the whole picture: they don’t just know what they should do, they understand why they’re doing it. As our Software Testing Practice Lead Sharon Robson says, “everyone in the development team (not just the developers) know the final application of the product they are developing and focus their efforts on providing the solution needed, rather than the functionality asked for.”

Think you’ve heard this all before? Well, you might already be working in an Agile way. Sharon says ATDD is the epitome of the Agile Manifesto; she goes so far as to say that “Agile should be called ATDD”. Why? Because of:

  • the focus on the customer’s needs rather than what they ask for
  • the focus on building just enough to meet those needs
  • the focus on the whole team working together to deliver them.

So you see, ATDD isn’t a method of testing; in fact, you can take the word “test” out of the equation. If you were told to work with the technique Acceptance Driven Development, what would you think of? What leads to acceptance? Who is accepting? What is acceptable? What needs to be done (functional)? How well does it need to be done (non-functional)? If you don’t have a team where everyone can answer all of these questions, how are you going to deliver the appropriate solution? And when you do answer these questions, the best way to make sure you’re interpreting what the customer wants correctly is to have the customer there, in the room, contributing to the discussion.

In terms of who is accepting what, I found a great definition in this blog post by Amir Kolsky and Scott Bain. Their view is that the word ‘acceptance’ is used in a wide sense:

  1. The customer agrees that if the system, which the team is about to implement, fulfills the acceptance criteria, then the work was done properly
  2. The developers accept the responsibility for implementing the system
  3. The testers accept the responsibility for testing the system

They say, “ATDD is a whole-team practice where the team members discuss a requirement and come to an agreement about the acceptance criteria for that requirement… this is a human-oriented interaction that focuses on the customer, identifying their needs.” (read their blog post on this subject here).

So now we ask: who is the customer? Projects can have multiple customers, from stakeholders to end users, operators, administrators, support, sales, testers and developers; it all depends on what you are building and who will interact with it.

So what does an acceptance test look like? Lasse Koskela has written a series of articles that look at how to implement ATDD (you can find them here). In Part 3 the tests are looked at in more detail. Lasse describes acceptance tests as “specifications for the desired behaviour and functionality of a system. They tell us, for a given user story, how the system handles certain conditions and inputs and with what kind of outcomes.” He gives some examples of what these look like; in summary, acceptance tests are:

  • owned by the customer
  • written together with the customer, developer and tester
  • about the what and not the how
  • expressed in the language of the problem domain
  • concise, precise and unambiguous

(read Lasse’s full description of acceptance tests here).
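To make those properties concrete, here is a minimal, purely illustrative sketch in Python (none of the articles quoted here prescribe a particular language or tool). The “returning customers earn loyalty points” story, the helper names and the tiny in-memory stub are all invented for illustration; the point is that the test reads in the language of the problem domain and says nothing about screens, databases or APIs: the what, not the how.

```python
# Illustrative sketch only: a business-facing acceptance test plus a tiny
# in-memory stand-in for the system under test, so the example runs on its own.
# In a real ATDD project the stub would be the system being built, and the
# rule itself would come from the conversation with the customer.

class LoyaltyShop:
    """Hypothetical in-memory stand-in for the system under development."""

    def __init__(self):
        self.orders = {}

    def register_customer(self, name):
        self.orders[name] = []
        return name

    def place_order(self, customer, total):
        self.orders[customer].append(total)

    def loyalty_points(self, customer):
        # Invented rule: one point per dollar on every order after the first.
        return int(sum(self.orders[customer][1:]))


def test_returning_customer_earns_loyalty_points():
    shop = LoyaltyShop()

    # Given a customer who has already placed one order
    ana = shop.register_customer("Ana")
    shop.place_order(ana, total=50.00)

    # When she places a second order worth $30
    shop.place_order(ana, total=30.00)

    # Then she earns 30 loyalty points on the new order
    assert shop.loyalty_points(ana) == 30


if __name__ == "__main__":
    test_returning_customer_earns_loyalty_points()
    print("acceptance test passed")
```

Notice that the test is concise and unambiguous, and that it could be read back to (or written with) the customer, even though the system behind it does not exist yet when the test is first agreed.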

So we’ve established that ATDD is about teams working together to define what an acceptable project outcome is before the project starts. What happens when there is a change? As John Smart points out in his article in JavaWorld, by using this methodology, “acceptance tests are no longer cantoned to the end of the project and performed as an isolated activity. Instead, ATDD tests are automated and fully integrated throughout the development process. As a result, issues are raised faster and can be fixed more quickly and less expensively.” (read John’s article in full here; it also gives examples of some of the tools that can help you on a project using ATDD).
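As a sketch of what that integration can look like in practice (pytest is assumed here; the marker name, the shipping rule and the stub are invented for illustration, and dedicated acceptance-testing tools offer richer, customer-readable formats), acceptance tests can live in the same automated suite as the unit tests and be tagged so a build server runs them on every commit:

```python
# Sketch: business-facing acceptance tests tagged so they run with the rest of
# the automated suite. Registering the marker (e.g. in pytest.ini) avoids
# warnings, and CI can run `pytest -m acceptance` on every commit so a broken
# business expectation is reported immediately, not at the end of the project.

import pytest


def free_shipping(order_total):
    """Hypothetical system behaviour: orders of $100 or more ship free."""
    return order_total >= 100


@pytest.mark.acceptance
def test_orders_of_one_hundred_dollars_or_more_ship_free():
    assert free_shipping(100.00)


@pytest.mark.acceptance
def test_smaller_orders_do_not_ship_free():
    assert not free_shipping(99.99)
```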

Now that we have a broad understanding of what ATDD involves, we can take a more in-depth look at what it means for testers with the help of Elisabeth Hendrickson (who has written an excellent article about it here). Elisabeth explains that ATDD involves “creating tests before code, and those tests represent expectations of behaviour the software should have. In ATDD, the team creates one or more acceptance-level tests for a feature before beginning work on it. Typically these tests are discussed and captured when the team is working with the business stakeholder(s) to understand a story on the backlog.” Elisabeth’s article sets out in detail the stages of an ATDD project for a tester, such as writing tests (e.g. invalid passwords should result in the error message “Passwords must be at least six characters long and contain at least one letter, one number and one symbol.”), distilling the tests (focussing on the essence of the test rather than details of implementation) and hooking the tests to the code as it is being developed.
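To ground that, here is a small, hypothetical sketch of the kind of test the password example suggests (this is not Elisabeth’s code or tooling; the validate_password function is a stand-in invented so the example is self-contained). The test captures the agreed rule and the exact error message the team settled on, while staying silent about how the rule is implemented:

```python
# Hypothetical sketch of an acceptance test for the password story. Only the
# agreed business rule and error message are asserted; the implementation
# below is a stub so the example is runnable on its own.

import re

PASSWORD_ERROR = ("Passwords must be at least six characters long and contain "
                  "at least one letter, one number and one symbol.")


def validate_password(password):
    """Stand-in for the system under test; returns an error message or None."""
    ok = (len(password) >= 6
          and re.search(r"[A-Za-z]", password)
          and re.search(r"[0-9]", password)
          and re.search(r"[^A-Za-z0-9]", password))
    return None if ok else PASSWORD_ERROR


def test_password_without_a_symbol_is_rejected_with_the_agreed_message():
    assert validate_password("abc123") == PASSWORD_ERROR


def test_password_meeting_all_the_rules_is_accepted():
    assert validate_password("abc123!") is None


if __name__ == "__main__":
    test_password_without_a_symbol_is_rejected_with_the_agreed_message()
    test_password_meeting_all_the_rules_is_accepted()
    print("acceptance tests passed")
```

The distilling step is visible in what the test leaves out: it pins down the behaviour the team agreed on, not which screen, field or validation library produces the message.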

Elisabeth’s description of the results of ATDD is so succinct and accurate that I’m going to use her words in this instance: “Teams that try ATDD usually find that just the act of defining acceptance tests while discussing requirements results in improved understanding…. Teams that follow the process all the way through, automating the tests as they implement the feature, typically find that the resulting software is more testable in general, so additional automated tests are relatively easy to add. Further, the resulting automated regression tests provide valuable, fast feedback about business-facing expectations.”

Using ATDD, every member of a development team is aware of the purpose of the project; they have clarity, understanding, and the sense of teamwork and satisfaction that comes from working with others to produce something they can all be proud of. Want to know more? Come to Fusion in Sydney or STANZ in Auckland or Christchurch to hear from Elisabeth Hendrickson, who has been a fan of and an expert in ATDD for many years.
