ASK Daniel Vacanti


#1

Please use this topic thread to post your questions directly to Daniel Vacanti! Dan will be answering YOUR questions October 22nd-26th, 2018.


#2

Hi Daniel,

Metrics are a funny one: the same metric can be used to support a team or to tear it apart. How do you know when a metric is doing more harm than good? What should someone look out for when working with metrics?

Cheers


#3

In your book Actionable Agile Metrics…you indicate that typical agile tools’ “Control Charts” are either not helpful or not accurate, but I couldn’t find where you explain why. Can you please summarize why these are not good tools for teams? I’m trying to incorporate better metrics for service-based teams. Thanks!


#4

First, we have to be very careful around how we define what a “Control Chart” is. Spoiler alert: what you see in Jira that is labelled “Control Chart” is definitely NOT a Control Chart. In fact, most everything you see in the Agile community (especially the Kanban community) that is labelled a Control Chart is definitely not a Control Chart. The best modern reference on Control Charts and how to construct them (without going back and reading the original Shewhart or Deming) is a guy by the name of Donald Wheeler. (His book on SPC can be found here.)

The fundamental thing you have to know is that simply constructing a Cycle Time Scatterplot and then overlaying mean and standard deviation lines is one of the classic mistakes when it comes to building a proper control chart. Wheeler has a couple of excellent examples of why here and here. The actual process of creating a CC is much more involved. A correct example of how to do it would be way too long for our purposes here, so I’ll leave it to you to check out the references to Wheeler for more information.

Getting back to the question at hand, it’s not that CC’s are inherently bad (if constructed properly); it’s just that I’m not convinced of their applicability to knowledge work, especially knowledge work in the complex product development domain. To be clear, I’m not saying they are not applicable; I’m just saying I’m not convinced.

What I am convinced of is that either way there is a much easier approach. That approach is to use a basic Cycle Time Scatterplot and overlay standard percentile lines (not Standard Deviation lines!). These types of charts are much easier to build, much easier to explain, and most importantly give a tremendous amount of actionable information. More information on why this is the case can be found in Chapters 10, 11, and 12 of my book, “Actionable Agile Metrics for Predictability”.
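
As a rough illustration (my own sketch, not taken from the book), overlaying percentile lines on a Cycle Time Scatterplot could look something like the Python below. The sample data, the choice of percentiles, and the plotting details are all assumptions made purely for the example:

```python
# Sketch: Cycle Time Scatterplot with percentile (not standard deviation) overlays.
# Assumes you can pull (completion day, cycle time in days) pairs out of your tool.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
completed = np.arange(30)                                   # hypothetical completion-day index
cycle_times = rng.lognormal(mean=1.5, sigma=0.6, size=30)   # skewed, like real cycle time data

percentiles = [50, 70, 85, 95]                              # commonly used percentile lines
levels = np.percentile(cycle_times, percentiles)

plt.scatter(completed, cycle_times)
for p, level in zip(percentiles, levels):
    plt.axhline(level, linestyle="--")
    plt.text(completed[-1], level, f"{p}th = {level:.1f}d")

plt.xlabel("Completion date")
plt.ylabel("Cycle time (days)")
plt.title("Cycle Time Scatterplot with percentile lines")
plt.show()
```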

One last thing: while proper CC’s are not necessarily bad, what Jira calls a CC (I think TFS is guilty of this too) is inherently evil. It’s evil because of the underlying assumptions they are making that are simply not true (e.g., that your Cycle Time data is normally distributed). Because your process data doesn’t match their assumptions, you are either going to get a signal that something is wrong when it is not or, more likely, you are not going to get a signal that something is wrong when it actually is.
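
To illustrate that mismatch with invented data (the roughly lognormal shape is only an assumption standing in for real, right-skewed cycle times), limits built from the mean and standard deviation cover a noticeably different share of the data than the normal model promises:

```python
# Sketch: why normal-distribution limits mislead on skewed cycle time data.
import numpy as np

rng = np.random.default_rng(42)
cycle_times = rng.lognormal(mean=1.5, sigma=0.8, size=10_000)  # skewed, hypothetical data

mean, std = cycle_times.mean(), cycle_times.std()
upper_limit = mean + 3 * std        # the kind of limit a normality-assuming chart would draw

share_below = (cycle_times <= upper_limit).mean()
print(f"mean + 3*std = {upper_limit:.1f} days")
print(f"share of items below that limit: {share_below:.4f} (normal model expects ~0.9987)")
print(f"actual 99.87th percentile of this data: {np.percentile(cycle_times, 99.87):.1f} days")
```

On data like this, several times more points land above the mean-plus-three-sigma line than the normal assumption predicts, which is exactly the kind of false signal (or missed signal) described above.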

I hope this helps, but please let me know if there is any other information that I can provide.

All the best and happy metric-ing!


#5

Hey Dan,
A common issue with metrics is data integrity. Often with these systems I’ve found that users will move tickets to active before they’ve actually started working on them, or worse, move them from one state to another, then realise it shouldn’t have been moved and return it to the old state (giving it a new start date).

What measures have you found effective at ensuring that the start and end dates pulled out of your ticketing system are accurate?


#6

Hi Brad,

Yes, any metric can be weaponized. And/or gamed. I have a good answer for the second one. But not a good answer for the first.

In terms of not using metrics to tear a team apart, this is a culture thing. Management needs to understand that these metrics (at least the metrics I talk about, anyway) are for the team to learn and improve. They are not there to be used as a whip (no pun intended). If teams are allowed to improve on their own (with coaching from time to time) then predictability will follow naturally. The second these metrics become a target, with punishment for noncompliance, you’ve lost.

In terms of gaming, one objection I always hear is something like “you can make Cycle Time and Throughput look better just by breaking up your items to be smaller”. Exactly! If I were going to have you “game” the system, then that is exactly how I would have you do it. As long as each item still independently delivers customer value.

I apologize for not having a better answer for you, but you will run into these types of problems no matter what metric you use. My hope is that teams can ignore the noise and get on with doing the proper, value-add work that they are supposed to do.

Thanks for the question!


#7

Just a very quick shoutout to Frank Vega who is the one who put me on to the Wheeler references and how Control Charts should be created. If anyone has more questions about this, then Frank is an invaluable resource.


#8

Hi Garth,

You are right. A common issue is data integrity. The short answer to “what can we do with our tooling to ensure dates are accurate” is not much. Data integrity is almost always a coaching thing and not a tooling thing. It’s the classic Garbage In, Garbage Out problem. If teams aren’t using Jira or Trello or TFS or whatever properly, then there isn’t really anything we can do to make up for that. In my book “Actionable Agile Metrics for Predictability” I go into some simple algorithms that can help to clean up data (e.g., items moving backward, how to handle skipped columns, etc.), but the truth is that data generation is very context specific, and any data cleanup that you try to do without understanding the context in which it was generated will usually make things worse.
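
As one hypothetical example of the kind of cleanup rule meant here (an illustration of a possible convention, not the exact algorithm from the book): take the first time an item entered an in-progress state as its start and the last time it entered done as its end, so that backward moves don’t reset dates. The status names and field layout below are assumptions:

```python
# Sketch: derive start/end dates from a status-change log so backward moves
# don't artificially reset the start date. Illustration only; the workflow
# states and the exact rule are assumptions, not the book's algorithm.
from datetime import datetime

IN_PROGRESS = {"In Progress", "Review"}
DONE = {"Done"}

def start_and_end(transitions):
    """transitions: chronologically ordered list of (timestamp, to_status) tuples."""
    start = None
    end = None
    for ts, status in transitions:
        if start is None and status in IN_PROGRESS:
            start = ts      # first entry into work: never reset, even after backward moves
        if status in DONE:
            end = ts        # entry into done
        elif end is not None:
            end = None      # reopened after done: clear the end date
    return start, end

transitions = [
    (datetime(2018, 10, 1), "In Progress"),
    (datetime(2018, 10, 3), "To Do"),        # moved backward by mistake
    (datetime(2018, 10, 4), "In Progress"),  # start date stays 1 Oct
    (datetime(2018, 10, 8), "Done"),
]
print(start_and_end(transitions))  # (2018-10-01, 2018-10-08)
```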

What I can say is that it can be very easy to spot bad data in the tool if you suspect you have integrity problems. How to do that would be a whole post in itself, so I can’t go into too much detail here. One little nugget I can give you, though, is that in general I am not a fan of the Flow Efficiency metric. However, if a team comes to me and tells me that they have a 90% flow efficiency, then my initial reaction is that they have a data integrity problem. They are probably engaging in the behaviours you mention above and/or aren’t capturing the right data (e.g., they are not capturing any blocked time data).
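
For context, flow efficiency is typically defined as active (working) time divided by total cycle time. A tiny, purely hypothetical calculation shows why a 90% figure is usually a red flag:

```python
# Flow efficiency = active (working) time / total cycle time.
# Hypothetical item: 20 days in the system, 2 of which were actual work.
cycle_time_days = 20
active_days = 2
flow_efficiency = active_days / cycle_time_days
print(f"Flow efficiency: {flow_efficiency:.0%}")  # 10%, a more typical figure for knowledge work

# A reported 90% on the same item would imply only 2 days of wait time,
# which usually means blocked/waiting time simply wasn't captured.
implied_wait_days = cycle_time_days * (1 - 0.90)
print(f"Wait time implied by 90% efficiency on a 20-day item: {implied_wait_days:.0f} days")
```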

I hope this helps!