Stop all manual QA. Urgent!


I love QA. I admire testers’ discerning ways of thinking. QA is essential.

But manual QA is life-threatening and the bane of agility.


  • Driverless cars: If teams are producing software right now for self-driving cars and they do not practice TDD…then I don’t want a future with self-driving cars.
  • The same is true of systems that administer financial transactions, medical reports, tax calculations, democratic voting systems – everything.

Two weeks ago I met the CTO of a medical imaging company. Their software takes MRI data as input and produces 3-dimensional maps of a human brain – their tools are designed specifically for neurosurgeons who operate on brain tumors. I asked him about their engineering practices, of course! I was excited… I thought surely they’ve got serious automated QA practices in place to ensure the accuracy of the data, the fidelity of the image, and the precision of the “map”. (Their software provides neurosurgeons with “routes” through the brain tissue to the tumor – think of a Google Map [but in 3D] with route suggestions.) Turns out… they have manual QA “teams” scattered offshore who are regularly 3 to 7 weeks behind the development. That is, code is regularly contributed, but feedback from manual QA staff often arrives 7 weeks later. And because the test cases are so complicated and numerous, their QA staff regularly test only the most recent adjustments/features.

I was shocked. I got emotional and actually caused a bit of a scene on the train. Like, “people’s lives are on the line!!” I said. “How can you stand being liable for that?”

You know what his response was? He said, “We only provide information to the surgeons. The decisions are their own.”


Bane of Agility

I’ve recently published a principle that states “As the intervals between deployments decrease, quality increases, and the amount & cost of technical debt decreases, and competitive advantage accumulates.” I called it the “Principle of Cumulative Quality Advantage” – it’s documented here with a nice graph and stuff.

Essentially, manual QA destroys agility. Why? Because manual QA is far more expensive than automated testing (especially when calculated on a “per test run” basis), so most organizations that perform manual QA simply do not run all tests frequently. Technical debt increases as teams defer both the creation and execution of tests, quality decreases in unknown and unpredictable ways, and deployments grow less frequent. When deployments are less frequent, the organization loses the ability to submit incremental improvements to its codebase safely, and its ability to recover from or patch critical flaws decreases.
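To make the “per test run” cost claim concrete, here is a back-of-envelope comparison. All figures are illustrative assumptions of mine, not data from the post; the point is only that the cost of a manual run scales with tester-hours while an automated run scales with (cheap) compute, so the gap compounds with every run.

```python
# Back-of-envelope cost comparison (all figures are illustrative
# assumptions, not data from the post).
manual_cost_per_run = 40 * 25.0   # 40 tester-hours at $25/hour
automated_cost_per_run = 0.50     # CI compute for one full suite run
runs_per_year = 250               # roughly one full run per workday

print(manual_cost_per_run * runs_per_year)     # manual: 250000.0
print(automated_cost_per_run * runs_per_year)  # automated: 125.0
```

Even if my assumed numbers are off by an order of magnitude, the asymmetry explains why manually-tested suites stop being run in full.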

Stop all manual QA. Instead, invest in cross-functionality in which QA & code are simultaneous. TDD is a well-known practice and shouldn’t be considered optional anymore.
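For readers unfamiliar with the practice, here is a minimal sketch of the TDD rhythm in Python. The function and test names are invented for illustration (loosely echoing the “route” example above); the tests are written with plain `assert` statements rather than a particular framework.

```python
# Hypothetical TDD example (names are illustrative, not from the post).
# The tests below are written FIRST; they fail against an empty stub
# ("red"), the implementation then makes them pass ("green"), and the
# code is cleaned up afterwards ("refactor").

def test_route_length_sums_segments():
    assert route_length([2, 3, 5]) == 10

def test_route_length_rejects_negative_segments():
    try:
        route_length([4, -1])
        assert False, "expected ValueError"
    except ValueError:
        pass  # the failure mode we asked for

# Written only after the tests above existed and failed:
def route_length(segments):
    """Total length of a planned route, given its segment lengths."""
    if any(s < 0 for s in segments):
        raise ValueError("segment lengths must be non-negative")
    return sum(segments)
```

The order matters: the test encodes the decision about behaviour (including the error case) before any implementation exists.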


Thanks for post David!

At dinner the other day you mentioned that you had no QAs on 2 of the teams you worked with. Can you talk about that? It’s something I’m very interested in exploring.

What are the advantages?
Possible disadvantages? (and ways to overcome them)

And lastly, if the industry trends that way, where do you see the QA role going, or what do you see those people doing in the future?


This reiterates why, for me, Agile or “business agility” or whatever we want to call the act of developing excellent software is about technical practices first and foremost.

I posed a similar question when Vic Bonacci was gathering input for his third deck of “conversation starters”:

The link to the picture is in the date-time line above, but it reads:
“Is Quality Assurance an inhibitor to developing with agility?”

The question was obviously meant to challenge assumptions about what “quality assurance” might be, as your post clearly identifies.


I had a coworker say to me once, “Nobody gets promoted by cutting a check.” I’ve been part of many enterprise-size organizations that have a large depth and breadth of legacy systems, and the biggest complaint I hear about automated testing is the cost of building out automation in legacy systems. I’ve heard multiple executives say “more automation!” out of one side of their mouth and “I’m not paying extra money!” out of the other. Yet truly becoming “agile” and using continuous integration & delivery successfully requires automated testing.

That being said, manual QA has its uses, and I think there will always be a role for it – but it should be more about exploration and trying to recreate “stupid user tricks” than the rote “does button A work” testing.


Well, I certainly agree that QA can’t be weeks behind dev. TDD is great, and our office does it, but you definitely need other sets of eyes: TDD does not catch genuine misunderstandings, or things that disagree with or don’t address the business need. I also don’t think it’s sufficient without other automation around the various layers of the product. Our process is basically: TDD; automation of regression at the service and/or front-end level; automation of new features ahead of time if possible; manual exploratory testing of features that touch the UI (as these are more expensive and time-consuming to automate – so automate basic scenarios, then anything discovered in exploration). Finally, there are things that are just very costly or impossible to automate (for example, integrations with third-party apps that are not automation-friendly).
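A tiny sketch of what a service-level regression test in such a layered process might look like. `TumorRouteService` and its API are invented for illustration (borrowing the imaging example from the original post); a real regression suite would exercise the deployed service, not an in-process toy.

```python
# Sketch of the layered approach described above, in plain Python.
# TumorRouteService and its plan_route API are hypothetical.

class TumorRouteService:
    """Toy stand-in for a service under regression test."""

    def plan_route(self, start, tumor):
        # Trivial planner: a direct path from entry point to tumor.
        return [start, tumor]

# Service-level regression test: it exercises the public interface
# only, so it keeps passing while the internals are refactored.
def test_route_starts_at_entry_and_ends_at_tumor():
    service = TumorRouteService()
    route = service.plan_route(start=(0, 0, 0), tumor=(10, 5, 2))
    assert route[0] == (0, 0, 0)
    assert route[-1] == (10, 5, 2)
```

Tests at this layer are the ones worth running on every commit; the UI-touching scenarios stay in the (smaller) exploratory bucket.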


Manual QA is an inhibitor. The job titles “Quality Assurance Specialist/Technician/Engineer” are inhibitors.

QA (the practice) is an enabler. Automated-QA is required. (And mostly I mean TDD and ATDD, but I’m not excluding other automated testing such as penetration and security testing, load and performance testing, etc.)
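Since ATDD is mentioned here alongside TDD: an acceptance test is typically phrased in the customer’s Given/When/Then vocabulary, often via a tool such as Cucumber or behave. The sketch below is plain Python with the Gherkin steps as comments; `Account` and its behaviour are illustrative, not from the post.

```python
# Hypothetical acceptance test sketched as plain Python rather than a
# Gherkin tool such as Cucumber or behave; names are illustrative.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the balance is 70
    assert account.balance == 70
```

The value of ATDD is less in the tooling than in writing that Given/When/Then conversation down *before* implementation, with the customer in the room.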


Seems a worthwhile place to bring “exploratory testing” into the discussion.


Hi Zach,

I consider TDD and ATDD to be “design” activities. Those activities are performed prior to implementation and help people discuss and make decisions about implementation. Exploratory testing is a “research” activity.

For example, I’d argue that exploratory testing is valid only when testing a “black box”. That is, if the task at hand is to reverse-engineer and document a system whose code cannot be directly inspected (protected by license or patent, for example), then exploratory testing is a valuable and necessary activity.

But I often hear people use the phrase “exploratory testing” to mean “tell the QA people to use the software for a while and point out any holes or problems…then log any defects found and make recommendations about missing functionality”. (Let’s be clear, that’s not what “exploratory testing” means.) This activity is evidence of rudderless product ownership – it’s like trying to drive a car while looking only in the rear-view mirror.

Would you agree with this assertion?


I offered the term “exploratory testing” without comment to see what happened. I do agree with your insight.

You made two distinctions that are meaningful to me:

1 - Exploratory testing answers a “question” of sorts… or serves to formalize a hypothesis about the software (or about how we intend to work on the software). Your use of the word “research” resonates with me in this regard.

Note that this is one way of looking at exploratory testing. Others may have different insights that are useful, too.


2 - Exploratory testing is not ad-hoc testing, as you alluded to. Such practice may indicate more than just a lack of direction in product ownership…


A provocative title :wink:

My take is that automated testing must be part of it. Regression testing, and a fair part of new requirements, can in fact be automated. But I also agree with comments already made – for ‘stupid user tricks’, new stuff, and UI-related work… there is still a role for manual QA.
And QA MUST be incorporated into the development cycle. A 7-week lag time IS ridiculous, but that’s about that company, not what is generally considered appropriate.
We are only now about to start automating, and I can’t wait :slight_smile:


Hi Troy,

I’ve worked with many teams that didn’t have team members called “QA”. Some of those teams had some of the best QA practices I’ve ever seen!

Those teams stand out in my memory because QA was a behaviour, a practice, an attitude – not a job title. Quality is everybody’s responsibility – not to be segregated from other activities. Every member of the team knew that quality was their road to success. So to achieve uncompromised quality:

  • 90+% of their code was covered by automated tests,
  • all but a few of those tests were written before code (TDD and ATDD),
  • almost zero lines of code were written by individuals (like 99+% of the code was written by pairs, trios, or mobs),
  • any manual QA activity was carefully scrutinized and led quickly to automation of that activity,
  • sometimes new code was written without tests – experiments, prototypes, “research spikes” – but that code would be replaced entirely with test-driven code rather than promoting untested code to production.

I’m sure others in this forum have similar experiences to share.


FYI: Since 3 years have passed, I’d like to note that I still basically agree with my original post. Though, the issue is more nuanced than I let on in 2016. I’d change my argument in this way:

I have come to appreciate the difference between “Testing” and “Checking”. Like @JayH referred to “does button A work”. (That’s checking.)
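In that distinction, a “check” is a scripted, binary pass/fail assertion that a machine can execute, while testing involves human judgement and exploration. A minimal sketch of a check, with an invented button handler (the names are illustrative):

```python
# "Checking": a scripted, binary pass/fail assertion a machine can run.
# on_button_a_clicked is a hypothetical UI event handler.

def on_button_a_clicked(state):
    """Toy handler: clicking button A marks the document as saved."""
    state["saved"] = True
    return state

def check_button_a_saves():
    result = on_button_a_clicked({"saved": False})
    assert result["saved"] is True   # pass/fail, no human judgement
```

Checks like this are exactly what should be automated wholesale; testing, in the judgement-laden sense, is what remains for humans.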