Allstacks believes that all teams should measure the following metrics, organized into four themes:

  • Completion: Roadmap Focus, Velocity, Sprint Attainment

  • Efficiency: Cycle Time, Lead Time, Issue Time in State, Bouncebacks, PR Response Time, Coding Days

  • Quality: Currently Open Defects by Priority, Defects Created Over Time

  • Readiness: Refined Backlog

Note: All of these metrics can be up-leveled in visualizations for engineering directors and VPs. Please reach out to your Allstacks representative with any questions.

Completion:

Roadmap Focus

Why:

  • Understand how much of your team’s effort is going towards roadmap work vs maintenance and tasks.

  • Answer the question “How long do you think this will take?” with confidence knowing how focused you’ve historically been on planned work.

Goal:

  • Varies based on the team. True support teams often do very little roadmap work, but new product development teams should have a much higher ratio. If you see the ratio of uncategorized work increasing, meet with your team to determine what’s causing the distraction. Common issues include mounting bugs or tech-debt, or interrupting tasks ripe for automation.

Starting Metric:

  • Velocity, by issue count or story points.

Settings to Apply:

  • Group series by parent properties, selecting the parent type that represents your team’s roadmap items.

  • Filter out “Won’t Do” statuses via Advanced Filters.

  • Utilize a pie chart and look at all work completed over the past 60 or 90 days.
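
Allstacks builds this chart for you; purely as an illustration of the underlying ratio, here is a minimal Python sketch (the data shape and parent types are hypothetical):

    from collections import Counter

    # Hypothetical shape: one record per issue completed in the past 90 days.
    completed = [
        {"key": "ENG-1", "parent_type": "Roadmap Epic"},
        {"key": "ENG-2", "parent_type": "Roadmap Epic"},
        {"key": "ENG-3", "parent_type": "Maintenance"},
        {"key": "ENG-4", "parent_type": None},  # uncategorized work
    ]

    by_parent = Counter(i["parent_type"] or "Uncategorized" for i in completed)
    total = sum(by_parent.values())
    for parent, count in by_parent.most_common():
        print(f"{parent}: {count / total:.0%} of completed work")

A growing “Uncategorized” slice is the cue to meet with the team about interruptions.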

Velocity

Why:

  • Understand if your team is consistently delivering based on their estimation and pointing.

Goal:

  • Consistent delivery, proportional to team size.

Starting Metric:

  • Velocity

    • Save one view by issue count (to avoid hiding unestimated items) and another by story points.

Settings to Apply:

  • Use a default view utilizing story points or issue count based on how your team estimates work.

  • Filter out “Won’t Do” type statuses using advanced filters.

  • Group by month to smooth sprint boundaries and view delivery trends over time.

  • The base goal of the metric is consistency of delivery and estimation: do we consistently deliver 20 points, and are 20 points always roughly the same amount of work?

  • Also save a view based on issue count and type for a more holistic view of everything the team has done historically.

  • Other cases:

    • Normalizing across teams: ratios normalize disparate estimation processes.

    • Different team types (QA vs. dev vs. product):

      • Dev: assignee-based velocity.

      • QA: historic assignee velocity, if work items are assigned to them.

      • Product: more focused on Roadmap Focus, capacity, and the historic allocation of the dev team.

    • Grouping by points or estimation method also helps determine the consistency of child delivery and ratios such as defects vs. new product work, Roadmap Focus, etc.
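
Allstacks charts velocity directly; as a purely illustrative sketch of the consistency question above, with hypothetical monthly totals:

    from statistics import mean, stdev

    points_per_month = [21, 19, 22, 18, 20]  # hypothetical story points delivered

    # Coefficient of variation: low values mean "20 points" is a stable unit of work.
    cv = stdev(points_per_month) / mean(points_per_month)
    print(f"Average: {mean(points_per_month):.1f} points/month, variability: {cv:.0%}")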

Sprint Attainment (for Scrum Teams)

Why:

  • Understand if your team is correctly committing to planned work.

  • Determine if interruptions are coming into your sprints after the start.

Goal:

  • Did we complete all of the issues committed to at the beginning of the sprint and meet our goal?

  • Threshold goals can vary, but a standard goal is 80-90% of committed work completed during the sprint.

Starting Metric:

  • Committed vs Completed

    • Filter by project or individual sprint names.

  • Added vs Added Completed

  • Percentage changes over time

Settings to Apply:

  • Group by sprint rather than by calendar week to avoid the inconsistencies of weeks.
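
The attainment percentage itself is simple arithmetic; a minimal sketch with hypothetical sprint numbers:

    # Hypothetical counts for one sprint.
    committed = 25             # issues committed at sprint start
    committed_completed = 21   # of those, completed by sprint end

    attainment = committed_completed / committed
    print(f"Sprint attainment: {attainment:.0%}")  # standard goal: roughly 80-90%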

Efficiency:

Cycle Time

Why:

  • Understand if your team is consistently sizing and estimating work.

Goal:

  • Consistent upper bounds by issue type and story points.

    • For example, the smallest body of work the team creates, such as a sub-task, should have a median cycle time of under three days.

Starting Metric:

  • Cycle Time

Settings to Apply:

  • Series grouping to consider:

    • Story Points: Understand if the team is consistently pointing work of relative technical complexity and time.

    • Issue Type: Confirm the team has consistent definitions of work by issue type.

      • E.g., the smallest issue type (sub-tasks or tasks) should be work that is always completable within 2-3 days.

  • Aggregate by median to remove statistical outliers.

  • Filter out “Won’t Do” type states.

  • Set goals based on calendar time rather than working time, and set alert thresholds slightly above the goal so weekends and other non-working time don’t flag items that aren’t truly stalled.
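
For intuition on why the median grouping matters, here is an illustrative sketch (cycle times, in days, are hypothetical):

    from collections import defaultdict
    from statistics import median

    issues = [  # hypothetical (issue_type, cycle_time_days) pairs
        ("Sub-task", 1.5), ("Sub-task", 2.0), ("Sub-task", 9.0),
        ("Story", 4.0), ("Story", 6.5),
    ]

    by_type = defaultdict(list)
    for issue_type, days in issues:
        by_type[issue_type].append(days)

    for issue_type, times in by_type.items():
        print(f"{issue_type}: median {median(times):.1f} days")
    # The median keeps the single 9-day outlier from distorting the sub-task picture.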

Lead Time

Why:

  • Lead time tracks work items from creation to completion. Use it to understand how quickly your team responds to high-priority bugs, defects, and customer inquiries.

Goal:

  • Varies based on team social contracts and SLAs, but an example goal would be all P2 bugs should be resolved, on median, within five calendar days.

Starting Metric:

  • Issue Lead Time

Settings to Apply:

  • Filter out “Won’t Do” type issue/work item states.

  • Filter down to bugs, defects and/or customer requests.

  • Group by month or quarter.

  • Group series by priority/severity.

  • Aggregate by median to remove statistical outliers.
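
As an illustration of checking the example P2 goal above (dates are hypothetical):

    from datetime import date
    from statistics import median

    p2_bugs = [  # hypothetical (created, completed) dates for P2 defects
        (date(2024, 5, 1), date(2024, 5, 4)),
        (date(2024, 5, 2), date(2024, 5, 9)),
        (date(2024, 5, 3), date(2024, 5, 6)),
    ]

    lead_times = [(done - created).days for created, done in p2_bugs]
    print(f"P2 median lead time: {median(lead_times)} calendar days (goal: 5 or fewer)")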

Issue Time in State

Why:

  • Analyze your issue/work item process for bottlenecks by finding the median or average time items spend in progress, in wait states, etc.

    • For example, “in progress” states should often be under 24 hours; make sure items aren’t bottlenecked waiting for QA resources.

Goal:

  • Set an upper bound per state. For example, the team should size individual cards so that they don’t need to be in the active development state for more than two days, and the QA team has committed to picking up ready-for-QA cards within 36 hours.

Starting Metric:

  • Issue Time in State: Stacked Column Chart

Settings to Apply:

  • Using the “Descendants of” selector, choose the individual sprints that your team wants to track.
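
Conceptually, time in state is reconstructed from each card’s status-change history; an illustrative sketch with a hypothetical transition log:

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical transitions for one card: (timestamp, state entered).
    log = [
        (datetime(2024, 5, 1, 9, 0), "In Progress"),
        (datetime(2024, 5, 3, 9, 0), "Ready for QA"),
        (datetime(2024, 5, 5, 9, 0), "Done"),
    ]

    hours_in_state = defaultdict(float)
    for (start, state), (end, _next_state) in zip(log, log[1:]):
        hours_in_state[state] += (end - start).total_seconds() / 3600

    for state, hours in hours_in_state.items():
        print(f"{state}: {hours:.0f}h")  # compare against your per-state upper bounds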

Bouncebacks

Why:

  • Displays cards that have revisited previous states, such as QA back to development, or development back to product requirements.

  • An increase shows churn and uncertainty.

Starting Metric:

  • Bouncebacks

Settings to Apply:

  • Sort by the number of bouncebacks per card; cards with repeated bouncebacks make good retrospective discussion items.
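
Conceptually, a bounceback is a move backward in the workflow; a minimal sketch (the state order is a hypothetical example):

    ORDER = ["To Do", "In Progress", "Ready for QA", "Done"]

    def count_bouncebacks(history):
        """Count backward moves in a card's state history."""
        ranks = [ORDER.index(state) for state in history]
        return sum(1 for prev, cur in zip(ranks, ranks[1:]) if cur < prev)

    # QA sent this card back to development once:
    history = ["To Do", "In Progress", "Ready for QA",
               "In Progress", "Ready for QA", "Done"]
    print(count_bouncebacks(history))  # -> 1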

Pull Request Response Time

Why:

  • Tracks the time to first response on pull requests, as well as approvals and time to merge.

  • Older pull requests cause merge conflicts and are more difficult to review once they are no longer top of mind.

Goal:

  • Between one-half and three days.

Starting Metric:

  • Pull Request Response Time

Settings to Apply:

  • Aggregate by average instead of median; median response times skew overly positive and hide stale pull requests.
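
To see why averages are recommended here, an illustrative sketch with hypothetical response times:

    from statistics import mean, median

    response_hours = [2, 3, 4, 5, 200]  # hypothetical; one PR sat for over a week

    print(f"median: {median(response_hours)}h, average: {mean(response_hours):.0f}h")
    # The 4h median hides the stale PR; the ~43h average surfaces it.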

Coding Days

Why:

  • Understand how frequently your team is able to commit code. Use it as a proxy for development coding-time availability; low coding days implies blockers to heads-down time for writing code.

Goal:

  • 3-4 average coding days per week.

    • Typical for product/feature-based dev teams.

    • Can be lower for teams that include architects, data scientists, QA, or DevOps engineers.

    • Why 3-4? Developers should check in code daily, but most scrum teams spend a day on meetings, refinement, and planning, so it will almost never be 5.

Starting Metric:

  • Coding Days

Settings to Apply:

  • Aggregate by median.

  • Group by month.
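
Conceptually, a coding day is any calendar day with at least one commit; an illustrative sketch with hypothetical commit dates:

    from datetime import date

    commit_dates = [  # hypothetical commit dates for one developer in one week
        date(2024, 5, 6), date(2024, 5, 6), date(2024, 5, 7),
        date(2024, 5, 9), date(2024, 5, 10),
    ]

    print(f"Coding days this week: {len(set(commit_dates))}")  # -> 4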

Quality:

Currently Open Defects by Priority

Why:

  • Understand how quality is impacted by your focus on efficiency and speed.

Goal:

  • Varies by team. Allstacks recommends setting upper limits on the number of open bugs per priority for each team at any given time.

Starting Metric:

  • Issue Creation

Settings to Apply:

  • Measure time from “First Activity to Present”.

  • Filter out all resolved or done states.

  • Filter to just bugs/defects.

  • Filter to your team’s relevant backlog/ownership.

  • Apply the “Include Disabled Team Members” flag.

  • Group series by priority or severity.
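
As an illustration of the recommended upper limits (counts and limits are hypothetical):

    from collections import Counter

    open_defects = ["P1", "P2", "P2", "P3", "P3", "P3"]  # hypothetical open bugs
    limits = {"P1": 0, "P2": 3, "P3": 10}                # hypothetical team limits

    counts = Counter(open_defects)
    for priority, limit in limits.items():
        n = counts.get(priority, 0)
        status = "OVER LIMIT" if n > limit else "ok"
        print(f"{priority}: {n} open (limit {limit}) {status}")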

Bug/Defect Creation Over Time

Why:

  • Understand if your teams and customers have been finding more defects, and what severity they are encountering over time.

Goal:

  • Fewer high-severity/high-priority issues created over time.

Starting Metric:

  • Issue Creation: By Type

Settings to Apply:

  • Group series by priority/severity.

  • Apply the “Include Disabled Team Members” filter.

  • Filter to the relevant backlog of the team via Advanced Filters.
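
An illustrative sketch of the grouping (months and severities are hypothetical):

    from collections import Counter

    created = [  # hypothetical (month, severity) per newly created defect
        ("2024-04", "High"), ("2024-04", "Low"),
        ("2024-05", "High"), ("2024-05", "High"), ("2024-05", "Low"),
    ]

    for (month, severity), count in sorted(Counter(created).items()):
        print(f"{month} {severity}: {count}")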

Readiness:

Refined Backlog Ready to Work

Why:

  • Determine the depth of refined work ready for the engineering team.

Goal:

  • Two to three sprints’ worth of fully refined work.

Starting Metric:

  • Burndown: Remaining Issues by State (in Issue Count or Story Points mode)

Settings to Apply:

  • Apply advanced filters to match returned cards to your team’s backlog.

  • Pick an issue state (or states) that represents refined work.
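
The goal translates to a simple runway calculation; an illustrative sketch with hypothetical numbers:

    refined_points = 55   # hypothetical story points sitting in a "Refined" state
    avg_velocity = 20     # hypothetical points completed per sprint

    print(f"Refined runway: {refined_points / avg_velocity:.1f} sprints (goal: 2-3)")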