How does Allstacks use Allstacks?

Best practices from the perspective of Product Management leaders 

Graham Langdon, our Director of Product, uses Allstacks in several key areas:

  • Sprint reviews

  • Sprint planning

  • Sprint retros

  • In partnership with the product team to ensure prioritized work is refined and scheduled

 

Top Metrics We Use:

1. Say/Do Percentages

Metric View: Committed vs. Completed | Say vs. Do Percent

 

General Filters used:

  • Descendants of: we select the appropriate Jira project to pull in sprints for a specific team.

  • Advanced Filters: we exclude Issue Type: subtasks.

 

  1. Review this metric during sprint reviews to share predictability trends with stakeholders

  2. Drill down into predictability for item types that have SLAs attached

  3. Questions to answer: 

    1. Are we getting more or less predictable with the work from our roadmap?

    2. If we are seeing predictability challenges, is it because the overall initiative is more complicated than we expected or are we inundated with interrupts (or something else)?

    3. Are we staying reliable, i.e., consistently fulfilling our SLAs? (The say/do arithmetic is sketched below.)
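For context, the arithmetic behind say/do is simply completed work divided by committed work for each sprint. Here is a minimal Python sketch of that calculation, using made-up sprint totals rather than anything pulled from the Allstacks product:

# Minimal say/do sketch. The sprint totals below are hypothetical;
# in practice they come from the Committed vs. Completed metric view.

sprints = [
    {"name": "Sprint 41", "committed": 24, "completed": 21},
    {"name": "Sprint 42", "committed": 26, "completed": 18},
    {"name": "Sprint 43", "committed": 22, "completed": 22},
]

for sprint in sprints:
    # Say/do percent = completed / committed, per sprint.
    say_do = sprint["completed"] / sprint["committed"] * 100
    print(f'{sprint["name"]}: {say_do:.0f}% say/do')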

 

 

2. Story Points Completed 

Metric Views: ‘Committed vs. Completed: Story Points’ and ‘Committed vs. Completed: Issues’

 

General Filters used:

  • Descendants of: we select the appropriate Jira project to pull in sprints for a specific team.

  • Advanced Filters: we exclude Issue Type: subtasks.

 

  1. Review these metrics during retros

    1. Committed vs. Completed: Story Points - pointed items tend to be pre-planned, roadmap-oriented work. Tracking our commitment and completion of pointed work indicates our ability to plan and execute on our roadmap.

    2. Committed vs. Completed: Issues - for Allstacks, this includes bugs and tasks and excludes sub-tasks. It exposes the impact of interruptions from unplanned work, as those items tend to come in unpointed (see the sketch after this list).

  2. Questions to answer:

    1. Are we biting off more than we can chew?

    2. Over time, are we committing to more work even though our historical track record shows we deliver at a consistent pace and in similar quantities?

    3. Do we need to go into our next sprint planning with a new target number for our commitment and be careful not to exceed it?
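To make the pointed vs. unpointed distinction concrete, here is a small Python sketch under the same filters described above (subtasks excluded). The issue records are hypothetical and the commitment logic is simplified; this is not an Allstacks or Jira data structure:

# Sketch: split a sprint's issues into pointed (planned) and unpointed
# (often interrupt) work. All records below are made up.

issues = [
    {"key": "ALL-101", "type": "Story",   "points": 5,    "done": True},
    {"key": "ALL-102", "type": "Story",   "points": 8,    "done": False},
    {"key": "ALL-103", "type": "Bug",     "points": None, "done": True},
    {"key": "ALL-104", "type": "Subtask", "points": None, "done": True},
]

# Mirror the general filters above: exclude subtasks.
in_scope = [i for i in issues if i["type"] != "Subtask"]

pointed = [i for i in in_scope if i["points"] is not None]
unpointed = [i for i in in_scope if i["points"] is None]

# Simplification: treat every pointed, in-scope issue as committed.
committed_points = sum(i["points"] for i in pointed)
completed_points = sum(i["points"] for i in pointed if i["done"])

print(f"Story points: {completed_points}/{committed_points} completed")
print(f"Unpointed (likely interrupt) issues: {len(unpointed)}")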

 

 

3. Refined Work Queued Up for Sprints

Metric View: Committed vs. Completed: Story Points

 

General Filters used:

  • Issues Active Between: Select a date range that includes the current week and ends on a future date. This brings any future sprints into the metric view. (Note that future sprints need start and end dates set in Jira to show up in the metric.)

  • Descendants of: we select the appropriate Jira project to pull in sprints for a specific team.

  • Advanced Filters: we exclude Issue Type: subtasks.

  1. These metrics help with sprint planning by letting us see what’s currently planned and what’s coming up.

  2. Allstacks uses two variations of this metric:

    1. Refined work 

      1. Our target is to have refined work queued up 3 sprints out with 15-25 points per sprint, depending on the team. This metric shows us if we’re on track.

      2. This metric also exposes whether we’ve been pushing things off to the next sprint and, as a result, overloading future sprints that then need to be cleaned up (a simple check is sketched after this list).

      3. If future sprints are overloaded because previously planned work has been pushed to other sprints, it likely means our roadmap needs adjustment – which is always a fun activity :).

    2. Bugs - we have a target of at least 2 P4s per sprint.

      1. We don’t want to solely respond to interrupting bugs. We want to be proactively burning down our backlog of bugs. This metric helps us stay accountable to prioritizing and squashing bugs from our backlog.
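A rough illustration of the “three sprints out, 15-25 points per sprint” check. The target constants come from the text above; the sprint names and loads are hypothetical:

# Sketch: flag future sprints that are under- or over-loaded with
# refined work. Targets are from the team guideline above.

TARGET_MIN, TARGET_MAX = 15, 25

future_sprints = [
    {"name": "Sprint 44", "refined_points": 22},
    {"name": "Sprint 45", "refined_points": 31},  # pushed work piling up?
    {"name": "Sprint 46", "refined_points": 9},   # needs more refinement
]

for sprint in future_sprints[:3]:
    points = sprint["refined_points"]
    if points < TARGET_MIN:
        status = "needs more refined work"
    elif points > TARGET_MAX:
        status = "overloaded - clean up or adjust the roadmap"
    else:
        status = "on track"
    print(f'{sprint["name"]}: {points} pts ({status})')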

 

 

4. Refined Work Backlog

Metric View: Burndown: Open Story Points

 

General Filters used:

  • Descendants of: we select the appropriate Jira project to pull in sprints for a specific team.

  • Advanced Filters: we exclude Issue Type: subtasks.

  1. We have a threshold of 100-200 points. 

  2. We don’t want to get below 100 because we need enough refined work to have the next few sprints pre-loaded. 100 points is enough to load those future sprints and still have plenty left over in case we realize some of the things we’d planned are no longer as pressing as we once thought.

  3. However, we don’t want to go over 200 points because there might be work that we’ve refined but no longer remember the exact context for. If we have a surplus of refined work, we can reduce our refinement meetings and use it as a signal for the product team to go back, look at the oldest refined tickets, and check whether they’re still valid. (A simple version of this range check is sketched below.)
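The guardrail reduces to a simple range check. A sketch with a hypothetical backlog total (the 100/200 bounds are the thresholds described above):

# Sketch of the 100-200 point guardrail on the refined backlog.
# The backlog total is made up; it corresponds to the
# Burndown: Open Story Points view described above.

BACKLOG_MIN, BACKLOG_MAX = 100, 200

refined_backlog_points = 215

if refined_backlog_points < BACKLOG_MIN:
    print("Below 100: schedule more refinement to pre-load future sprints.")
elif refined_backlog_points > BACKLOG_MAX:
    print("Above 200: slow refinement; re-validate the oldest refined tickets.")
else:
    print("Within threshold: the refined backlog is healthy.")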

 

 

5. Work Completed/Velocity by Strategic Theme

At Allstacks, Themes are containers for Initiatives. Initiatives are containers for epics.

  • Our subtasks equate to about a day’s worth of work. If a body of work is going to take more than a day, we break it down into subtasks.

 

Metric View: Velocity: By Team, All Assignees

 

Under advanced filters, we’re limiting the results to items with a “Done” status so we don’t count any canceled items, and we’re excluding subtasks and parent types such as epics.

  1. This shows us where we’re investing at the highest level for our stakeholders: how much we’re investing in product roadmap work vs. technical roadmap work vs. unplanned work. We review two different views of this information:

a. Excluding subtasks, so that only shipped incremental work is counted in each category. This shows stakeholders the ratio of items delivered by theme.

b. Including subtasks, so that daily activity in subtasks counts toward the ratios. This gives a better view into where our day-to-day effort is going: a large story counts as a single item in the increments-only view above, but that same story might have dozens of subtasks representing weeks of work to get it delivered. (The sketch below computes both views.)
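Both views boil down to computing the same per-theme ratios over two different item sets. A hypothetical Python sketch (the item records and theme names are illustrative only):

# Sketch: investment ratios by strategic theme, computed two ways --
# excluding subtasks (delivered increments) and including them
# (day-to-day effort). All data below is made up.

from collections import Counter

items = [
    {"theme": "Product Roadmap",   "type": "Story"},
    {"theme": "Product Roadmap",   "type": "Subtask"},
    {"theme": "Product Roadmap",   "type": "Subtask"},
    {"theme": "Technical Roadmap", "type": "Story"},
    {"theme": "Unplanned",         "type": "Bug"},
]

def theme_ratios(items):
    # Share of items per theme within the given set.
    counts = Counter(i["theme"] for i in items)
    total = sum(counts.values())
    return {theme: round(n / total, 2) for theme, n in counts.items()}

increments = [i for i in items if i["type"] != "Subtask"]
print("Delivered increments:", theme_ratios(increments))
print("Day-to-day effort:   ", theme_ratios(items))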

 

6. Cycle Time by Story Points

 

  1. Some people estimate in hours, others by level of effort. Allstacks tries to estimate by complexity.

  2. Each individual team’s points will vary. The question is whether your team is consistent in what they consider to be a “5” or an “8,” etc.

  3. From a Product Management perspective, if something has points on it, it’s gone through refinement and is ready for the team to work on. Trends in commitment completion will affect your team’s appetite for higher-point stories during sprint planning, so anticipating that appetite is very useful for Product Managers heading into planning (one way to check point consistency is sketched below).
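One way to test that consistency question is to group completed items’ cycle times by point value and look at the spread within each group. A sketch with hypothetical data:

# Sketch: wide spread in cycle time within one point value suggests
# the team doesn't share a definition of that size. Data is made up.

from collections import defaultdict
from statistics import mean, stdev

completed = [
    {"points": 3, "cycle_days": 2.0},
    {"points": 3, "cycle_days": 2.5},
    {"points": 5, "cycle_days": 4.0},
    {"points": 5, "cycle_days": 9.0},  # a "5" behaving like an "8"?
    {"points": 8, "cycle_days": 8.5},
]

by_points = defaultdict(list)
for item in completed:
    by_points[item["points"]].append(item["cycle_days"])

for points, days in sorted(by_points.items()):
    spread = stdev(days) if len(days) > 1 else 0.0
    print(f"{points}-pointers: mean {mean(days):.1f}d, spread {spread:.1f}d")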

 

7. Bugs 

a. P3 backlog (track against our threshold; if we go over, we’ll need to plan a bug squash)

b. P4 backlog (target < threshold by end of Q2)

c. Bugs Fixed (excluding items resolved as “Won’t Do”)

d. P3s Committed vs. Completed by Sprints

e. P5s Remaining

f. Bugs Created vs. Completed

g. Customer Feature Request tickets

 

8. Miscellaneous Metrics

a. Commit Volume - activity indicator

i. Sometimes this acts as a good indicator of whether the team is in more of a planning/spiking mode vs. a delivery mode. Low commit volume correlates with planning/exploration, while higher volumes indicate the team is on a roll and getting things across the line.

b. Burndown: Open Story Points - another way to view the burndown/portfolio report. Select a specific Milestone in the ‘Descendants of’ picker.

i. This lets me see, for specific high-priority milestones, how much scope is refined and whether they’re burning down as expected or have stalled, so I can react accordingly.

c. Uncategorized Stories - target: make sure there are no uncategorized stories

i. This helps us keep our Jira hygiene high so that our reporting in Allstacks is as effective as possible.

d. Open Pull Requests - anticipate impending bottlenecks

i. If the number of open pull requests gets too high, it’s likely we’ll lose a day or two that sprint just getting caught up on code review.

e. Issues Open by State - anticipate impending bottlenecks

i. If items in progress (or in QA, code review, etc.) are significantly higher than the norm, we’re probably going to lose time in the sprint getting rebalanced, and some of the started work might not get across the line this sprint. (Both signals are illustrated in the sketch below.)
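Both of those bottleneck signals reduce to comparing current counts against a limit or a norm. A hypothetical sketch (the counts and thresholds are illustrative, not Allstacks defaults):

# Sketch: early-warning checks for review and WIP bottlenecks.
# All numbers below are made up for illustration.

open_prs = 14
OPEN_PR_LIMIT = 10  # above this, expect to lose time to review catch-up

wip_by_state = {"In Progress": 12, "Code Review": 7, "QA": 5}
typical_wip = {"In Progress": 6, "Code Review": 3, "QA": 4}

if open_prs > OPEN_PR_LIMIT:
    print(f"{open_prs} open PRs: plan for a code-review catch-up day.")

for state, count in wip_by_state.items():
    # Flag states running well above their historical norm.
    if count > 1.5 * typical_wip[state]:
        print(f"{state} at {count} (norm {typical_wip[state]}): rebalance risk.")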
