Podcast Released: Acceptance Criteria


#1

Listen Here

Join Colleen Johnson (@scrumhive), Jay Hrcsko (@backdooragile), James Gifford (@scrummando), and Andrew Leff (@LeanLeff) on this user-submitted topic: “What are Acceptance Criteria?”.

The show discusses the origins of AC, what good AC looks like, and some of the anti-patterns around user acceptance criteria.

Read more at http://agileuprising.libsyn.com/acceptance-criteria#yADekWjJJWCZxmOp.99


#2

This was thought-provoking. For all the ways people think of acceptance criteria - the purpose, the form, who writes them - the answer, it seems, is often ‘it depends’ :smiley:
One comment/question: I hadn’t thought that the acceptance criteria SHOULD define the whole extent of what should be tested, only the stated functionality in the story. If it’s about login, I would have expected QA to write up and execute the negative test cases just as expected due diligence around authentication. But then, as you were saying, the question remains: how does the developer know to cover the negative use cases? Is all of that expected to be stated within a given story? Is there no prior/archived knowledge expected for something like authentication? At least to be discussed, perhaps resulting in a task under the story?
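
To make “negative test cases” concrete, here’s a minimal sketch of the kind of checks I mean, in pytest. The authenticate() function and the credentials are hypothetical stand-ins for a real auth service, invented just for the example:

```python
# Sketch of the "negative test cases" around login: authenticate() and the
# credentials below are hypothetical stand-ins, not a real auth service.
import pytest

VALID_USER, VALID_PASS = "alice", "correct-horse"

def authenticate(username: str, password: str) -> bool:
    # Toy implementation so the sketch runs end to end.
    return username == VALID_USER and password == VALID_PASS

@pytest.mark.parametrize("username, password", [
    ("alice", "wrong-password"),   # valid user, bad password
    ("mallory", VALID_PASS),       # unknown user, "valid" password
    ("", ""),                      # empty credentials
    ("alice", ""),                 # missing password
])
def test_login_rejects_bad_credentials(username, password):
    assert authenticate(username, password) is False
```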


#3

Everything you are describing is a potential use case, but sometimes stories need more. While not all quality engineers feel this way, many I know don’t feel that “test cases” are something to write when you have things like CD pipelines, thoughtful unit tests to validate the acceptance criteria (i.e. TDD), and end-to-end tests in the pipeline.
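
For illustration, here’s a minimal sketch of what “a unit test validating an acceptance criterion” can look like, TDD-style. The rule itself (orders over $100 ship free) is invented for the example, not something from the show:

```python
# Sketch: an acceptance criterion written first as a unit test, TDD-style.
# The rule ("orders over $100 ship free") is invented for illustration.
def shipping_cost(order_total: float) -> float:
    # Toy implementation, written after the tests below were red.
    return 0.0 if order_total > 100.0 else 7.99

def test_orders_over_100_ship_free():
    assert shipping_cost(150.00) == 0.0

def test_orders_at_or_under_100_pay_the_standard_rate():
    assert shipping_cost(100.00) == 7.99
```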

Now, to accomplish that, you can capture the things I listed above in one of two places: in the acceptance criteria themselves, or as part of the definition of done. If the team is aligned there, then the acceptance criteria can stay focused on capturing use cases, error handling, UI criteria, things of that nature.

Make sense?


#4

(I haven’t listened to the podcast yet. I’m responding to the headline and other comments.)

I recently revisited c2.com to put the phrase “Acceptance Criteria” into perspective with all I’ve learned and read about XP, Scrum, etc. Here’s what I’ve concluded:

  • Early posts about User Story at c2.com include phrases like “conditions of satisfaction” and “acceptance tests”.
  • “acceptance tests” refers to automated tests, often but not exclusively written before production code (TDD at the level of the user interface, which has become more commonly known as ATDD, BDD, or Spec by Example)
  • “Acceptance Criteria” (capitalized and all) shows up later and was introduced not by the same authors as the early User Story discussions (who generally don’t capitalize it)

Google tells us that the phrase “acceptance criteria” shows up more often on this page (https://en.wikibooks.org/wiki/RUP_-_IBM_Rational_Unified_Process/Phases) than on the entire c2.com wiki.

I’m a semantics nut, so I’ve come to use each phrase very carefully and deliberately:

  1. “Conditions of satisfaction”: sometimes users and business stakeholders express specific conditions which must be satisfied. The back of a story card is a logical place to capture those expressions with brief notes.
  2. “Acceptance Tests”: automated tests which prove the desired UI behaviour (see the sketch after this list)
  3. “Acceptance Criteria”: a phrase born of development methods in which design, implementation, and testing are segregated by time, by people, or both. As in: before execution begins, let’s define the criteria by which the feature shall be tested (many months from now) during the UAT phase, so that engineers know how their implementation will be deemed acceptable.
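
Since sense #2 is the easiest to blur with the others, here’s a minimal sketch of an automated acceptance test in the Given/When/Then style. TodoApp is a toy stand-in for a real UI driver, invented for the example:

```python
# Sketch of sense #2: an automated test that proves desired UI behaviour,
# phrased Given/When/Then. TodoApp is a toy stand-in for a real UI driver.
class TodoApp:
    def __init__(self) -> None:
        self.items: list[str] = []

    def add_item(self, text: str) -> None:
        self.items.append(text)

    def visible_items(self) -> list[str]:
        return list(self.items)

def test_added_item_appears_in_the_list():
    # Given an empty todo list
    app = TodoApp()
    # When the user adds an item
    app.add_item("buy milk")
    # Then the item is visible in the list
    assert app.visible_items() == ["buy milk"]
```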

I intend to listen to the podcast – perhaps your perspective complements my own.