Strategies to convince the suits to spend time on "Sharpening the Axe"


Continuing the discussion from Developing people in agile teams:

As the topic is of interest to several people, I have spun off this thread. It goes back to the question raised by @andycleff: how do we convince management (the suits) to let development teams spend significant time on self-improvement (sharpening the axe)?

I am looking forward to hearing back from you and reading your suggestions!

Developing people in agile teams

The suits love metrics. What can we “measure” to show them the value of sharpening the axe?


Suits love metrics that are related to output, because those help them track progress.

Sharpening the Axe addresses outcome. Outcome metrics are much more subtle, and you need to be a good marketeer to sell them to the suits.

To give an example: happiness is a great metric for driving personal development. If people can do what they do best, and if I support them in becoming even better at those things, they will be happy.

Happiness is a key driver for improving outcome. But will the suits agree to invest time and effort in sharpening the axe just because I can show them that people are becoming happier?
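To make that pitch concrete, even a lightweight pulse survey gives you a trend line you can put in front of the suits. A minimal sketch of the bookkeeping, assuming a simple 1-5 scale and entirely made-up data:

```python
from statistics import mean

def happiness_trend(scores_by_sprint, window=3):
    """Rolling average of per-sprint happiness scores (hypothetical 1-5 pulse survey)."""
    sprint_means = [mean(scores) for scores in scores_by_sprint]
    return [
        round(mean(sprint_means[max(0, i - window + 1):i + 1]), 2)
        for i in range(len(sprint_means))
    ]

# Each inner list is one sprint's survey responses (invented data).
print(happiness_trend([[3, 3, 4], [3, 4, 4], [4, 4, 5], [4, 5, 5]]))
# -> [3.33, 3.5, 3.78, 4.22]
```

A slow upward slope after you start investing in people is exactly the kind of chart that supports the narrative.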

They will - but only if I tell them the story. The narrative has to involve them and make them part of the story. This becomes obvious once the suits understand that they are happy for the same reason as everyone else in the organisation.


The happiness metric! One of my favorites…

@hdietrich I’ve been looking for various instruments… niko niko unfortunately folded at the end of 2016… that was so promising.

I’m test-flighting a mood app.

What are you using to add some data to your storytelling?


On the topic of metrics, I think code metrics can go a long way toward showing the results of “sharpening the axe.” I’ve been writing about static analysis and code metrics quite a bit lately (preparing a talk about them). Code metrics are often hard to talk about outside of the technical organization.

The latest version of NDepend addresses this by adding formulas that associate a cost (in time or money) with specific code smells, issues in the code base, etc. More information about that feature can be found here:

It’s an interesting way to quantify the cost of not improving (or to show the return on a training investment) and to share that outside of the engineering organization. These formulas can also be modified and customized. Unfortunately, NDepend is strictly a .NET static analysis tool, but similar formulas could be used in other analysis tools.
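For teams not on .NET, the core idea is easy to approximate by hand: take the issue counts from whatever analyzer you run, attach an assumed fix effort to each category, and multiply by an hourly rate. A rough sketch (every category, effort figure, and rate below is invented for illustration; these are not NDepend’s built-in values):

```python
def technical_debt_cost(issue_counts, fix_hours, hourly_rate):
    """Estimated cost of fixing all reported issues, in currency units."""
    return sum(
        count * fix_hours[category] * hourly_rate
        for category, count in issue_counts.items()
    )

# Hypothetical counts from a static analysis report.
issue_counts = {"long_method": 42, "duplicated_code": 17, "dead_code": 8}
# Assumed average fix effort per issue, in hours (illustrative only).
fix_hours = {"long_method": 1.5, "duplicated_code": 2.0, "dead_code": 0.5}

print(f"${technical_debt_cost(issue_counts, fix_hours, 80):,.2f}")
# -> $8,080.00
```

Rerunning the same calculation after a training investment gives you a before/after delta in dollars, which is a language the suits speak.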


We have established a team health check across all teams, which also helps when arguing with managers about where to put our efforts. Even though it is primarily meant as a kind of retrospective in which the team reflects on its own performance and challenges its ambitions, it works pretty well for arguing with management as well.

Anyhow, we have been doing this for a couple of months now, and I am not sure whether the current momentum can be kept up over a longer period of time.
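In case it helps anyone replicate this: the aggregation behind such a health check is trivial. A sketch, assuming the traffic-light voting (green/yellow/red) common in Spotify-style health check models; the dimension names and votes are made up:

```python
# Common convention in health check models: green=3, yellow=2, red=1.
SCORE = {"green": 3, "yellow": 2, "red": 1}

def dimension_averages(votes):
    """Average each health-check dimension's traffic-light votes across the team."""
    return {
        dimension: round(sum(SCORE[v] for v in team_votes) / len(team_votes), 2)
        for dimension, team_votes in votes.items()
    }

# Hypothetical votes from one session.
votes = {
    "learning": ["green", "yellow", "yellow", "red"],
    "delivering_value": ["green", "green", "yellow"],
}
print(dimension_averages(votes))
# -> {'learning': 2.0, 'delivering_value': 2.67}
```

Plotting these averages session over session is what gives managers the trend line to argue over.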


I have been beating the “outcomes over outputs” drum at my current engagement, and I am still amazed by executive-level folks asking to see burn-downs as a measure of progress. It’s clear there is a lot of room to coach improvements.

One outcome I try to promote is “small batch increments”. Yes, we can coach smaller stories (or teams can simply assign smaller story points, an example of teams molding to what is measured), but I am curious what else we could use to measure small-batch behavior.

I am looking at deployments throughout the week or day. My assumption is that if teams are deploying something testable at least once a day, they are thinking through how they will deliver something of value, even if a story is not yet deemed “done”.

Does anyone have other ways to look at this? Or are there flaws with using a deployment metric to tie back to the small batch principle?
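If it’s useful, the raw metric is cheap to pull from a deploy log. A sketch of the counting, with an invented log extract; the gap days are the interesting part, since they show where a team went more than a day without shipping anything testable:

```python
from collections import Counter
from datetime import datetime

def deployments_per_day(timestamps):
    """Count deployments grouped by calendar day from ISO-8601 timestamps."""
    per_day = Counter(datetime.fromisoformat(ts).date() for ts in timestamps)
    return {day.isoformat(): count for day, count in sorted(per_day.items())}

# Hypothetical deploy log extract; note the silent 2024-03-06.
log = [
    "2024-03-04T09:12:00", "2024-03-04T15:40:00",
    "2024-03-05T11:03:00",
    "2024-03-07T10:30:00", "2024-03-07T14:22:00", "2024-03-07T16:55:00",
]
print(deployments_per_day(log))
# -> {'2024-03-04': 2, '2024-03-05': 1, '2024-03-07': 3}
```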


I think measuring deployments per week/day/hour is a good start, and I always encourage it. However, the “are they valuable deployments?” question gets kind of lost in there, especially if your team is using feature flags. Feature flags are great, and again highly recommended. But they also let you off the hook in terms of actually providing value with each story that you deploy. They let you ship a bunch of non-functioning technical stuff that may or may not ever get used, and at that point I always ask, “are you REALLY shipping small batch increments?”

So I’m not sure where I’m going with this, but perhaps it’s something like deployments per day where whatever was delivered has seen some kind of customer usage (and the value that goes with it). Perhaps supported by analytics/logs?
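One way to sketch that combination: join the deploy log against whatever usage analytics you have, and count only deploys whose feature actually saw customers. Everything below (feature names, the user threshold) is invented just to show the shape of the idea:

```python
def valuable_deployments(deploys, users_per_feature, min_users=5):
    """Count deploys whose feature saw real customer usage, versus all deploys.

    deploys: list of (day, feature) tuples from the deploy log.
    users_per_feature: feature -> distinct users seen in analytics after release.
    """
    valuable = sum(1 for _, feature in deploys
                   if users_per_feature.get(feature, 0) >= min_users)
    return valuable, len(deploys)

# Hypothetical data: three deploys, one dark behind an unused feature flag.
deploys = [("03-04", "checkout-v2"), ("03-05", "dark-mode"), ("03-07", "search")]
usage = {"checkout-v2": 120, "dark-mode": 0, "search": 34}

valuable, total = valuable_deployments(deploys, usage)
print(f"{valuable} of {total} deployments saw customer usage")
# -> 2 of 3 deployments saw customer usage
```

The ratio between the two numbers is one crude answer to “are these valuable deployments?”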

Henrik Kniberg has a great talk somewhere on YouTube where he argues that your DoD should really include “customer problem solved in a way that is better than their current solution” (paraphrasing)…

Definitely agree though, it always seems to be a long and hard slog to get to “outcomes over outputs”… Maybe it’s because the words sound so similar :stuck_out_tongue:


@hdietrich Are you doing the team health check model on a monthly basis, then? My experience with this is doing it quarterly. They start out awesome: lots of energy, lots of good conversations, healthy debates, etc. After 3-4 quarters of doing this, the energy of the sessions wanes and grumblings start about the usefulness of the concept. So I think the teams get burnt out on it, and that is on a quarterly cadence. If you are doing it on a monthly cadence, I think that burnout could happen even sooner. Then again, perhaps there are ways to prevent the burnout from happening, which now has me thinking about how to prevent it! lol

This method did work for helping managers and management-level folks see areas where the team could use improvement via “sharpening the axe” type activities, and therefore it added high value early on. Management didn’t get burnt out on the concept nearly as fast as the teams did. I left the company after about five quarters of this, so I cannot say how things turned out beyond that.



Confirmed - after four iterations the tension is gone. Anyhow, it did the job of convincing management that change is happening on various levels. It sometimes feels like you just need to give them a chart with some horrible figures but a positive trend, and they will invest in becoming better :wink:


SonarQube is a free analysis tool we’ve had great success with. There are analysers for most languages as well as the ability to create custom metrics.


Invariably there are moments in a year when one week for teams to sharpen axes is justifiable. I used to do this while strategic planning was taking place, as priorities tended to shift as a result. It was pretty much a hackathon-type event; I called it SHARK Week: Sharpening, Honing And Retooling Kanban Week. Teams did things like learning, finding better SQL source code control, and improving the CI/CD pipeline. Teams had to make a case for the improved efficiency. Granted, I flew cover as VP of Product Development; I didn’t need convincing, but my up-line did. Demo time was amazing.