Back to the Basics - Sprint Review purpose sanity check


Alright all, I’m having a moment of imposter syndrome 12 years into this journey, and I need a sanity check. My problem is that I’m used to working at a Ri level in Scrum, and I’m now in an environment that is somewhere between Shu and maybe Ha, so I have to rethink how I teach and communicate to the people in manager positions around me, who have their own pre-existing biases.

The current environment is a continuous deployment environment (think Facebook), and our users are internal employees. Our story DONE criteria include not only that the thing is tested and deployed, but also a launch plan with post-deploy metrics, monitoring, and an acceptance feedback loop before the item moves to our DONE backlog.

This means that anything that reaches sprint review has already been seen by the entire team (small teams), the PO, and the customers… anyone who might show up at sprint review.

Am I crazy to think that, in a “formal Scrum” environment, this makes the sprint review a somewhat wasteful process as it’s defined at a high level in the 2017 Scrum Guide? There’s little need to redundantly review what was already reviewed and accepted along the way when the team and the work are small enough that we all know it already.

The Ri voice in my head doesn’t really care… “do what works for the intended outcome!” it screams. BUT, my current environment is doing something they call a review/demo (which is neither), and before I start coaching against that problem, I’m gut-checking myself, lest they throw the red Jeff Sutherland Scrum book (the “scripture” before I arrived) back at me and say they hired the wrong agile coach to teach them Scrum today, and Kanban/other agile later.

I’ve never been in a truly continuous, instant-deploy environment before, and that’s what is making me think this through after watching it for a couple of months…


Here is my take…

The purpose of the sprint review is to provide a chance to inspect and adapt what was delivered during that sprint. For your team, it seems that’s what was delivered to production, which is awesome! This event is set up so people outside of the team can see the software in action and provide feedback on potential changes to the backlog, roadmap, etc. It’s also an opportunity for the team to interact with those people.

So it’s not about the demo or “seeing” it so much as it’s about people who don’t work together every day discussing the future of what’s being developed, short and long term, and making adjustments when needed.

If this is already happening, then you may not need it. You can be the judge of that!


Yeah, that’s the crazy part… in this context, all the people who come to sprint review are already fully aware of, and have given feedback on, everything before the stories close. As for feedback on the roadmap, backlog, etc., it’s teased out during this same process or at planning. The team interacts with those people more here than in any of my prior five agile transformations (and this is still low adoption maturity).

It just feels odd to think that the academic parts of the sprint review are either already covered by our done criteria or redundant with the retro and planning cycles (customers are part of planning too).

That being said, there is still a lot here to be worked on… I’m just checking whether a sprint review could be wasteful if these other criteria are met? It feels weird to consider that possibility!


I’m sitting here thinking that it’s amazing that the team is able to do all of that for every story in a sprint. If my team’s definition of done included some external dependencies we would never be able to complete anything.

I also found myself wondering what the team’s sprint goals were. If the stories are so independent that they can get feedback individually, do they have a coherent sprint goal, or is it just a random collection of things?

Sorry if my questions are off base. I’m probably misinterpreting the context of the situation.


In most cases the stories are both atomic and related. They are sliced so that each can ship on its own, with each story adding a layer of value on top of the prior one, but the sprint tends to center on an area or cluster of related stories.

When you have a good product roadmap, the sprint goals are more about incremental and iterative product enhancements along that roadmap. When you don’t, this is more of a concern.

We also have the advantage of a UX team that does discovery before the sprints, so some of the feedback about whether the thing being built is right can be fleshed out with clickable prototypes. That closes the gap some too.


Seems redundant to me. Unless there is some other value the team or stakeholders are getting out of the review, a lot of that communication of team progress could be handled via email.

I am also assuming that there are methods outside of this review for users of the product to give feedback.


Would it be possible to pivot the Review from a demo to a user feedback session, with real users giving feedback or replaying feedback sessions from users?

Is there value in focusing the Review more on where we are in our product backlog, as a precursor to Sprint Planning? Or combined with Sprint Planning?

It feels like there is still value there; it’s just that the right people aren’t coming to the review.


I think of the end-of-sprint ceremony, whatever we want to call it, where the team pauses and shows their work results, as an opportunity for the following five things to occur:

  1. To show working code and gain feedback on its correctness in meeting client needs.
  2. To show plans for upcoming (next) iterations/sprints to confirm directional soundness.
  3. To expose some of the challenges the team is facing (overcoming, needs help with, etc.) and engage the organization in assisting them.
  4. To expose how the team is approaching their continuous improvement efforts (growing, learning, improving, evolving, etc.).
  5. To foster organizational (broader audience) team awareness of the team’s progress & efforts and provide an opportunity for appreciation.

In other words:

  1. Results/Outcomes
  2. Plans/Coming Attractions
  3. Challenges/Impediments
  4. Improvement Trends & Learning
  5. Appreciations

IMHO, far too many focus only on #1 as the goal of the review/demo. I actually think it’s too big of a communication & transparency opportunity to simply focus on what was delivered.

It’s also a rich opportunity to explore other things, albeit briefly, for customers, stakeholders, and leaders.

I know this might fly in the face of the Scrum Guide guidance. So, I’m not disagreeing with it. I’m simply augmenting it based on my personal experience.

In the case of this continuous deployment context, #1 is occurring. But you might be missing out a bit on #2–#5, and you might want to create an opportunity to occasionally explore those.

Simply food for thought…


Echoing much of what @bgalen shared, my personal analog for the Sprint Review & Demo is kindergarten Show & Tell, where the Demo is the Show and the Review is the Tell (I invite feedback on whether that’s an improper comparison or an oversimplification).

From your description it sounds like all interested parties are getting the “show” part over the course of the sprint via the continuous deployment context. Maybe consider what more value could be mined from emphasizing the “tell” portion: tell the story of the team’s sprint in terms of what led to the results seen in the product.

  1. What happened?
  2. Why did it happen?
  3. Walk through what led to that key product decision.
  4. What did we learn about our product along the way?
  5. What can users do now that they couldn’t before, and why does that matter? (value & context)
  6. Who did we engage with on that successful design?
  7. Who do we publicly recognize for great work?
  8. When we ran into this impediment, here’s how we fixed it and reached this product outcome, and thanks to this person for their support in getting it resolved.

Some of this dialogue might overlap with what comes up in the Retrospective, and I think that’s OK, even intentional. Use the Review as a mechanism to orient everyone: can we establish agreement that “yes, this IS what happened”? Then carry that forward into the team’s retro: “OK, we all agree this is what happened and why we think it did; let’s focus on what we do next to improve.”

It might be a way to create some positive transparency into the process, and perhaps generate helpful insights from those outside the team that may not be visible to those who are too close to the work (trees/forest).


Yeah, this is exactly where I’m trending… there’s been a big focus on “Scrum by the book” before my arrival, and I’m using this thread as a sanity check before tackling the dogma aspect of this conversation.

I like your 5-point summary approach to the meeting, and if I can win the first part of the dogma conversation, I can use it to steer where I’d like to coach this weekly gathering toward…

  1. I’d file this under “happy problems” :slight_smile:
  2. @bgalen has hit the nail on the head with his 5 points
  3. I don’t think any of that ‘flies in the face’ of the Scrum Guide… You’re inspecting the increment, sharing future plans, and conceivably updating your Product Backlog based on all the above… Also, nothing in the guide says you can’t or shouldn’t also be doing those things throughout the sprint.

And a bonus point: how many sprint reviews in a row don’t result in updates to the product backlog? If that number gets high enough, maybe then you’ve got yourself a wasteful event.