Measuring the cumulative value delivered for a feature?


How do your teams measure the cumulative value delivered by each story for a feature?

Take, as an example, a story-mapping session for a feature that generates 100 stories. As teams deliver these stories, the first handful would/should deliver the most value. As iterations go on, each remaining story theoretically delivers less value. It’s almost as if the cumulative value approaches a limit.

I ask because I remember reading or hearing that once you reach about 80% of that upper-value limit, it’s probably a good idea to move on to the next feature. Is this something that people are actually doing? If so, how are they doing it?
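To make the “80% of the limit” idea concrete, here is a small sketch. It assumes a hypothetical exponential-saturation model of cumulative value (the shape and the rate constant `k` are illustrative, not from any real data), and asks how many stories it takes to hit 80% of the ceiling:

```python
import math

# Hypothetical model: cumulative value saturates toward an upper limit v_max.
# The rate constant k is made up for illustration.
def cumulative_value(n_stories, v_max=100.0, k=0.03):
    """Cumulative value delivered after n stories, under exponential saturation."""
    return v_max * (1 - math.exp(-k * n_stories))

def stories_to_reach(fraction, v_max=100.0, k=0.03):
    """Smallest story count whose cumulative value reaches `fraction` of v_max."""
    n = 0
    while cumulative_value(n, v_max, k) < fraction * v_max:
        n += 1
    return n

print(stories_to_reach(0.8))  # with k=0.03, 54 stories already reach 80%
```

Under this (assumed) curve, roughly half the 100 stories capture 80% of the value, which is the intuition behind stopping early and moving to the next feature.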


I’ve been recommending a product backlog burnup chart. See the lower chart on this blog post:

I count the stories in a given feature set. At intervals (each iteration, or on some cadence) I update the chart. I use a spreadsheet.
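The spreadsheet bookkeeping behind such a burnup chart is simple enough to sketch; the iteration counts below are made-up example data, not from any real team:

```python
# Per iteration, record how many stories in the feature set were completed,
# then plot the cumulative "done" line against the total scope line.
completed_per_iteration = [8, 12, 10, 9, 7]  # illustrative counts
total_scope = 100                            # stories in the feature set

burnup = []
done = 0
for count in completed_per_iteration:
    done += count
    burnup.append(done)

print(burnup)              # cumulative done line: [8, 20, 30, 39, 46]
print(done / total_scope)  # fraction of scope complete: 0.46
```

The flattening of the cumulative line over time is exactly the “value reaching a limit” effect the original question describes, made visible.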


I’m also a fan of the release and/or feature burnup. When I read “100 stories” I thought, wow, that’s big! My reaction made me think that if I had something that big, I would probably split it, several times. In doing that I would probably end up with multiple features, several of which, like a movie editor, I would leave on the cutting room floor. By the time I got around to them, something smaller and more valuable would jump ahead of them on the backlog. And I believe that any feature/story that has been on the backlog for a while without being done will never be done, and gets deleted.

I guess then that my answer is, don’t have features so big that you can’t tell when enough has been done?


The number 100 was just an easy number to use for example purposes. I would agree that 100 stories would be a lot for any single epic and there is most likely a way to split or break down that epic further if that’s the case.

After speaking with a few colleagues at work I came to the conclusion that it’s difficult and not worth it to track the business value (like NPV) down to the story level. It’s better to do this at an epic level.

I think the way I phrased my original question was not clear. Using the same example of an epic with 100 stories, how do we test the assumption that we need all 100 stories before moving on to the next epic?

The answer I have right now is that the product owner would have to make this decision using analytics and customer feedback, but is anyone doing this through other means?


IMO, regardless of the size of the feature, I’d stop working on it when the next thing to deliver for that feature has a lower ROI (or return vs. team effort) than work related to another feature. Our goal is to maximize the amount of work not done: the smallest amount of output possible that results in the largest / best customer outcomes, which ultimately deliver the business impact we’re looking for. We try to achieve that through the work we select to do.
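The selection rule above can be sketched in a few lines. The backlog items and their value/effort numbers here are invented for illustration; the point is only the comparison across features:

```python
# Candidate next items, one per feature. "value" and "effort" are
# whatever estimates the product owner trusts (e.g. NPV and story points).
backlog = [
    {"feature": "A", "story": "A-next",  "value": 3, "effort": 5},
    {"feature": "B", "story": "B-first", "value": 8, "effort": 4},
]

def roi(item):
    """Return per unit of effort for a candidate item."""
    return item["value"] / item["effort"]

next_item = max(backlog, key=roi)
# Feature A's next story has ROI 0.6; feature B's first story has ROI 2.0,
# so we stop adding to feature A and start feature B.
print(next_item["story"])
```

In other words, the stopping point for a feature isn’t a fixed story count or percentage; it’s the moment something else on the backlog beats its next increment.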