Effective sprint reviews with multiple stakeholders


#1

How do you have effective sprint reviews with technical stakeholders and non-technical stakeholders?

Background
We’ve come across the following situation. We have internal stakeholders composed of both technical and non-technical members. We also have non-technical external stakeholders. We’re having trouble balancing the sprint review between both audiences. If we cater to our non-technical stakeholders, then our technical stakeholders have questions that we don’t cover. On the other hand, if we cater to the technical stakeholders, then our non-technical stakeholders lose interest and we don’t get any good feedback.

Potential Solutions

  1. Have two sprint reviews (Don’t like this option)
  2. Prioritize stakeholders and cater to the priority (Better option)
  3. What else?

#2

Hmm, tough one. I get not wanting to do 2 sessions, as the team could view it as overkill, but the motivator of getting that crucial non-technical feedback is powerful.

I’m in the same boat with the same problem. We have taken the approach of 2x-as-long reviews catering to both stakeholder groups. It’s a heavy bit of facilitation in the sessions, but they appear to be working. The other problem we’ve now found is overlap: multiple reviews end up in the same time box. Baby steps…


#3

Interesting – how do you coordinate this? Does everyone just join at the beginning and choose when to listen, or do you say, for example, at 3:00 we’ll be non-techy and at 3:30 we’ll go more in depth?


#4

Option A. The teams answer whatever questions come up but are mindful not to go too far down the rabbit hole in answering. The collective group makes sure we don’t get too technical, and that the stakeholders don’t start solutioning during the session. It’s a delicate balance and took a few tries to get right.


#5

From my coaching experience, I find one way to ensure that the right stakeholders are in the conversation is to first look to the outcome hypothesis and the leading indicators of the feature. It is hard for stakeholders to connect the dots and understand the value a team is delivering. If you look to the metrics that validate a business outcome, and construct those in partnership with business and teams, you have a much better idea of the "who" in the equation, and who should be in the demonstration. I ask my teams to validate the hypothesis as a part of their definition of done, and use that validation as the foundation of the demonstration to those persons that helped construct the hypothesis. The additional benefit of this approach is that teams begin to own the outcome, not just the work, which improves the relationship between the business and the team.


#6

I ask my teams to validate the hypothesis as a part of their definition of done, and use that validation as the foundation of the demonstration to those persons that helped construct the hypothesis.

Do they actually do this?

I think this is great advice for features that are built around a hypothesis; however, some teams may be dealing with foundational features, e.g. the ability for a user to manage their user profile. The users of the application want to see this working, but other scrum teams in the company want to see how they can use the API endpoints to integrate this into their own applications. I think both are valid stakeholders, but if I had to choose one over the other, it would be the users of the applications.


#7

I would say it is more art than science at this point. I am finding that the teams in question are getting better at it, but as you have probably found, it is not always possible (say about 50% of the time). It is a good way to think at the very least.