Tracking PI Objectives in a tool?


For those of you/us doing SAFe…how do you track your PI objectives to completion? Particularly in a tool (Rally/AgileCentral, VersionOne, etc.). We have multiple enabler teams and ARTs all running and I’m looking for a programmatic way to get a temperature check about where we stand, basically on demand (not status, but yes status :slight_smile: ). Ideas?


We are doing a test of Jira Portfolio (an add-on built by Atlassian)… still learning, but you might want to review that.

(we are just starting to dabble with SAFe patterns here, so I’m a newb)


Thanks, I’ll take a gander. We’re bought into the Rally/Agile Central space, but thematically it’s the same thing.


Ok, this is sort of a plug, but I just had to respond (I work for CA… or soon to be Broadcom). If you’re going to Agile 2018, check out the CA booth for the latest work being done by the CA Agile Central team. It’s focused around objectives. Nothing’s prime time yet, but I know the teams working on it would love feedback from awesome folks like yourselves.


^ My experience w Portfolio:

  • Works at scale
  • Does a decent temperature check based on “yesterday’s weather” - provided you have at least a PI’s worth of history across all ARTs/value streams.
  • Adequate visualization of demand vs capacity
  • Good tools for doing “What if” scenarios
  • Steep learning curve for admins
  • UI / configs still really rough to get a handle on; having a dedicated Jira Portfolio admin is helpful
  • Ability to share info outside of JIRA (export PNGs for the dreaded PPT) is very limited


And here I was starting to think I was just stupid!!! (I’m one of two being asked to set this up)


Def not just you. Not intuitive at all. Configs all over the place. All of them matter. Generally frustrating at the beginning. I found it useful to RTFM. Still have to look stuff up all the time.


I read the ENTIRE online doc/manual from Atlassian and it didn’t go deep enough (it seemed more high level sales support than detailed for admin config). If you are aware of anything I should be looking at besides YouTube videos by 3rd parties, let me know!


I mean…should I be laughing like I am or nah? :smiley:


Hey, some of us do RTFM !!!


Also… it wasn’t that deep of a manual.


Part of the problem!!!


Hello there,

I have the same problem with my trains: getting the PI objectives into a living format to simplify conversation and improve transparency. In the past, we listed PI objectives in the release timebox for each team, then for the program as a whole. This is a good start, but still a static document. From there, we cheated and created a single user story with the tasks as the individual objectives. This is working well, but you do end up with a user story that is not value. The bigger solution is that CA is ready to release an Objective work item, which should help solve this problem going forward. I think it is going to be released to cloud solutions on 9/18.


VersionOne Lifecycle has the red strings on the PI board and supports PI objectives. I haven’t seen better, tool-wise. Paper beats tech for colocated teams, though - so much can be done creatively to model the problem space. But card data is needed for some things, e.g. regulatory audit trails, tracing code changes to backlog items and defects, etc.


While I must say that I am excited to hear that CA Agile Central is looking for a means to incorporate PI Objectives more closely into the tool, I do think we need to be careful about what we are measuring here…
PI Objectives and their related Business Value (planned vs. actual) are a means to glimpse into a team’s and a train’s predictability. In other words: how well did the system plan and execute to that plan? They also give high-level guidance to the team during a PI to help prioritize work that comes up (is it more important than this other thing, which is going toward the PI Objective we all said we were going for at the beginning of the PI?).

As such, it is less about ‘tracking to completion’ of the Objectives and more about how well we understand our plans, how well the system respects this planning, and how well we keep this in mind as we strive to deliver the value intended. This is by its nature a more ‘static’ measure. Of course, over time, you can develop trends that help you see whether you are getting better or worse at your overall predictability.

If you are trying to track completion, that would suggest you have some sense of what ‘complete’ is - i.e. acceptance criteria and probably a Definition of Done. That fits less with PI Objectives (which may represent the aggregate of multiple Features) and more with a Feature or Epic itself. If we are simply copying our committed Features into the PI Objectives, this seems redundant and largely unneeded. Just track the Features to completion: use things like the Release Tracking tools or Portfolio Timeline, or set Milestones around key integration points (the system demos?) and track the ‘completion’ of necessary behaviors against that. PI Objectives are a different beast entirely.

See “Inputs and Outputs of PI Planning” and the Program Predictability Measure under SAFe Metrics for more on the nuanced differences between Portfolio Items like Features and Capabilities versus PI Objectives.
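The planned-vs-actual calculation behind the Program Predictability Measure can be sketched as a small helper. This is a minimal illustration, not anyone’s tool: the team names and numbers are made up, and the stretch-objective handling (excluded from the planned total, but counted toward actual value achieved) is my reading of SAFe’s published description of the metric.

```python
# Hypothetical sketch of SAFe's Program Predictability Measure:
# each team's ratio of actual to planned business value for its
# PI objectives, averaged across the ART.

def team_predictability(objectives):
    """objectives: list of dicts with 'planned', 'actual', 'stretch' keys.
    Returns the team's achievement as a percentage. Stretch objectives
    are left out of the planned (committed) total, but any value they
    deliver counts toward actual."""
    planned = sum(o["planned"] for o in objectives if not o["stretch"])
    actual = sum(o["actual"] for o in objectives)
    return 100.0 * actual / planned if planned else 0.0

def program_predictability(teams):
    """teams: dict mapping team name -> objective list.
    Returns the unweighted mean of the team percentages."""
    scores = [team_predictability(objs) for objs in teams.values()]
    return sum(scores) / len(scores) if scores else 0.0

# Made-up example data for two teams on one train:
teams = {
    "Red": [
        {"planned": 10, "actual": 8, "stretch": False},
        {"planned": 5,  "actual": 5, "stretch": False},
        {"planned": 3,  "actual": 0, "stretch": True},   # stretch, missed
    ],
    "Blue": [
        {"planned": 8,  "actual": 8, "stretch": False},
        {"planned": 2,  "actual": 4, "stretch": True},   # stretch, delivered
    ],
}

# Red: 13/15 ≈ 86.7%; Blue: 12/8 = 150%
print(round(program_predictability(teams), 1))
```

Trending this number PI over PI (SAFe suggests most teams should land in roughly the 80-100% range) is the ‘temperature check’ this measure is meant to give, as opposed to a burn-down of the objectives themselves.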