
Allstacks believes that all teams should measure the following metrics, broken into the following themes:

  • Completion: Roadmap Focus, Velocity, Sprint Attainment

  • Efficiency: Cycle Time, Lead Time, Issue Time in State, Coding Days, PR Response Time

  • Quality: Currently Open Defects by Priority, Defects Created Over Time

  • Readiness: Refined Backlog

Note: All of these metrics can be up-leveled in visualizations for engineering directors and VPs. Please reach out to your Allstacks representative with any questions.

Roadmap Focus

Why:

  • Understand how much of your team’s effort is going towards roadmap work vs maintenance and tasks.

  • Answer the question “How long do you think this will take?” with confidence, knowing how focused you’ve historically been on planned work.

Goal:

  • Varies based on the team. True support teams often do very little roadmap work, but new product development teams should have a much higher ratio. If you see the ratio of uncategorized work increasing, meet with your team to determine what’s causing the distraction. Common issues include mounting bugs or tech-debt, or interrupting tasks ripe for automation.

Starting Metric:

  • Velocity, by issue count or story points.

Settings to Apply:

  • Group series by parent properties to represent your team’s roadmap parent type.

  • Filter out “Won’t Do” statuses via Advanced Filters.

  • Use a pie chart and look at all work completed over the past 60 or 90 days.
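
Under the hood, this view boils down to a ratio of completed work by parent type. Here is a minimal sketch of that math in Python, using hypothetical data (the category labels are illustrative, not Allstacks fields):

    from collections import Counter

    # Hypothetical completed work from the last 90 days, tagged with its
    # roadmap parent type (None = uncategorized).
    completed = ["Roadmap", "Roadmap", "Maintenance", None, "Roadmap", None]

    counts = Counter(p or "Uncategorized" for p in completed)
    total = sum(counts.values())
    for parent_type, n in counts.most_common():
        print(f"{parent_type}: {n / total:.0%} of completed work")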

Velocity

Why:

  • Understand if your team is consistently delivering based on their estimation and pointing.

Goal:

  • Consistent delivery from month to month, with expected variation proportional to team size.

Starting Metric:

  • Velocity

    • Save one view by issue count (to avoid hiding unestimated items) and one by story points.

Settings to Apply:

  • For the default view, use story points or issue count based on how your team estimates work.

  • Filter out “Won’t Do” type statuses using Advanced Filters.

  • Group by month to avoid the inconsistencies of weeks.
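
For intuition, monthly velocity amounts to bucketing completed items by month and summing counts or points. A rough sketch with made-up dates and estimates:

    from collections import defaultdict
    from datetime import date

    # Hypothetical completed issues: (completion date, story points or None).
    issues = [(date(2023, 1, 12), 3), (date(2023, 1, 30), None),
              (date(2023, 2, 3), 5), (date(2023, 2, 17), 2)]

    by_month = defaultdict(lambda: {"count": 0, "points": 0})
    for done, points in issues:
        bucket = by_month[(done.year, done.month)]
        bucket["count"] += 1                # issue count hides nothing
        bucket["points"] += points or 0     # unestimated items add zero points

    for (year, month), totals in sorted(by_month.items()):
        print(f"{year}-{month:02}: {totals['count']} issues, {totals['points']} points")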

Sprint Attainment (for Scrum Teams)

Why:

  • Understand whether your team is committing to the right amount of planned work.

  • Determine if interruptions are coming into your sprints after the start.

Goal:

  • Did we complete all of the issues committed to at the beginning of the sprint and meet our goal?

  • Threshold goals can vary, but a standard goal is completing 80-90% of committed work during the sprint.

Starting Metric:

  • Committed vs Completed

Settings to Apply:

  • Using the “Descendants of” selector, choose the individual sprints that your team wants to track.
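
The attainment percentage itself is simple set arithmetic. A sketch with hypothetical issue keys (the ENG-* keys are illustrative):

    # Hypothetical sprint: issues committed at sprint start vs completed by sprint end.
    committed = {"ENG-1", "ENG-2", "ENG-3", "ENG-4", "ENG-5"}
    completed = {"ENG-1", "ENG-2", "ENG-4", "ENG-6"}  # ENG-6 was pulled in mid-sprint

    attainment = len(committed & completed) / len(committed)
    interruptions = completed - committed

    print(f"Sprint attainment: {attainment:.0%}")       # 60% here, below the 80-90% goal
    print(f"Unplanned items completed: {sorted(interruptions)}")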

Cycle Time

Why:

  • Understand if your team is consistently sizing and estimating work.

Goal:

  • Consistent upper bounds by issue type and story points.

    • For example, the smallest body of work the team creates, such as a sub-task, should have a median cycle time of under three days.

Starting Metric:

  • Cycle Time

Settings to Apply:

  • Series grouping to consider:

    • Story Points: Understand if the team is consistently pointing work of relative technical complexity and time.

    • Issue Type: Confirm the team has consistent definitions of work by issue type.

      • E.g., the smallest issue type (sub-tasks or tasks) should be work that is always completable within 2-3 days.

  • Aggregate by median to reduce the impact of statistical outliers.

  • Filter out “Won’t Do” type states.
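
Why the median? It keeps one runaway issue from distorting the picture. A small illustration with hypothetical cycle times:

    from statistics import median

    # Hypothetical cycle times in days, keyed by issue type.
    cycle_times = {
        "Sub-task": [1.0, 2.5, 1.5, 14.0],   # 14.0 is an outlier
        "Story":    [4.0, 6.5, 5.0, 7.0],
    }

    for issue_type, days in cycle_times.items():
        print(f"{issue_type}: median {median(days):.1f} days")
    # The sub-task median stays at 2.0 days despite the 14-day outlier.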

Lead Time

Why:

  • Understand how quickly your team responds to high-priority bugs, defects, and customer inquiries.

Goal:

  • Varies based on team social contracts and SLAs, but an example goal would be that all P2 bugs are resolved within a median of five calendar days.

Starting Metric:

  • Issue Lead Time

Settings to Apply:

  • Filter out “Won’t Do” type issue/work item states.

  • Filter down to bugs, defects, and/or customer requests.

  • Group by month or quarter.

  • Group series by priority/severity.

  • Aggregate by median to reduce the impact of statistical outliers.
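
Checking the example goal above against data reduces to comparing a per-priority median with the SLA. A sketch with hypothetical lead times:

    from statistics import median

    # Hypothetical lead times (creation to resolution) in calendar days, by priority.
    lead_times = {"P1": [0.5, 1.0, 2.0], "P2": [3.0, 4.0, 9.0], "P3": [10.0, 21.0]}
    sla_days = {"P2": 5}  # the example goal from above

    for priority, days in lead_times.items():
        med = median(days)
        status = ""
        if priority in sla_days:
            status = " (meets goal)" if med <= sla_days[priority] else " (misses goal)"
        print(f"{priority}: median {med:.1f} days{status}")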

Issue Time in State

Why:

  • Analyze your issue/work item process for bottlenecks.

    • Find the median or average time spent in progress, in wait states, etc.

      • For example, in-progress states should often be under 24 hours; make sure items aren’t bottlenecked waiting for QA resources, etc.

Goal:

  • Set an upper bound per state. For example: our team should size individual cards so that they don’t need to be in the active development state for more than two days, and our QA team has committed to picking up ready-for-QA cards within 36 hours.

Starting Metric:

  • Issue Time in State: Stacked Column Chart

Settings to Apply:

  • Aggregate by median.

  • Group monthly.
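
Conceptually, time in state is derived from an issue’s status-transition history. A minimal sketch with a made-up transition log:

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical status-transition history for one issue: (timestamp, new state).
    transitions = [
        (datetime(2023, 3, 1, 9), "In Progress"),
        (datetime(2023, 3, 2, 9), "Ready for QA"),
        (datetime(2023, 3, 4, 15), "In QA"),
        (datetime(2023, 3, 5, 9), "Done"),
    ]

    hours_in_state = defaultdict(float)
    for (entered, state), (left, _next) in zip(transitions, transitions[1:]):
        hours_in_state[state] += (left - entered).total_seconds() / 3600

    for state, hours in hours_in_state.items():
        print(f"{state}: {hours:.0f} hours")
    # "Ready for QA" at 54 hours would breach the 36-hour commitment above.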

Refined Backlog (Ready to Work)

Why:

  • Determine the depth of refined work ready for the engineering team.

Goal:

  • Two to three sprints’ worth of fully refined work.

Starting Metric:

  • Burndown: Remaining Issues by State (in issue count or story point modes)

Settings to Apply:

  • Use Advanced Filters to match the returned cards to your team’s backlog.
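
Translating backlog depth into “sprints of runway” is a single division. For example, with hypothetical numbers:

    # Hypothetical counts: refined backlog items vs average issues completed per sprint.
    refined_items = 34
    avg_completed_per_sprint = 14

    runway_sprints = refined_items / avg_completed_per_sprint
    print(f"Refined runway: {runway_sprints:.1f} sprints")  # goal: two to three sprints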

Currently Open Defects by Priority

Why:

  • Understand how quality is impacted by your focus on efficiency and speed.

Goal:

  • Varies by team. Allstacks recommends setting upper limits on the number of open bugs, by priority, per team at any given time.

Starting Metric:

  • Issue Creation

Settings to Apply:

  • Measure time from “First Activity to Present”.

  • Filter out all resolved or done states.

  • Filter to just bugs/defects.

  • Filter to your team’s relevant backlog/ownership.

  • Apply the “Include Disabled Team Members” flag.

  • Group series by priority or severity.
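
In spirit, this chart filters to unresolved defects and counts them by priority. A sketch with hypothetical issues (the state and priority names are illustrative):

    from collections import Counter
    from datetime import date

    # Hypothetical backlog: (issue type, priority, state, first activity date).
    issues = [
        ("Bug", "P1", "Open", date(2023, 4, 1)),
        ("Bug", "P2", "In Progress", date(2023, 3, 15)),
        ("Bug", "P2", "Done", date(2023, 2, 1)),     # resolved, filtered out
        ("Story", "P2", "Open", date(2023, 4, 2)),   # not a defect, filtered out
    ]
    resolved_states = {"Done", "Resolved", "Won't Do"}

    open_defects = Counter(
        priority for kind, priority, state, _first in issues
        if kind == "Bug" and state not in resolved_states
    )
    print(dict(open_defects))  # e.g. {'P1': 1, 'P2': 1}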

Bug/Defect Creation Over Time

Why:

  • Understand if your teams and customers have been finding more defects, and what severity they are encountering over time.

Goal:

  • Fewer high-severity/high-priority issues created over time.

Starting Metric:

  • Issue Creation: By Type

Settings to Apply:

  • Series: By priority/severity

  • Apply the “Include Disabled Team Members” filter.

  • Filter to the team’s relevant backlog via Advanced Filters.
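
The underlying series is a count of defects bucketed by creation month and severity. A brief sketch with made-up dates:

    from collections import Counter
    from datetime import date

    # Hypothetical defects: (created date, severity).
    defects = [(date(2023, 1, 5), "High"), (date(2023, 1, 20), "Low"),
               (date(2023, 2, 8), "High"), (date(2023, 2, 9), "High")]

    created = Counter(((d.year, d.month), sev) for d, sev in defects)
    for ((year, month), sev), n in sorted(created.items()):
        print(f"{year}-{month:02} {sev}: {n}")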

Pull Request Response Time

Why:

  • Older pull requests cause merge conflicts and are more difficult to review once they are no longer top of mind.

Goal:

  • Between half a day and three days.

Starting Metric:

  • Pull Request Response Time

Settings to Apply:

  • Aggregate based on average instead of median.
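
One plausible reason to average rather than take the median here is that the average surfaces slow-to-review stragglers. An illustration with hypothetical timestamps:

    from datetime import datetime
    from statistics import mean, median

    # Hypothetical PRs: (opened, first review activity).
    prs = [
        (datetime(2023, 5, 1, 10), datetime(2023, 5, 1, 15)),  # 5 hours
        (datetime(2023, 5, 2, 9),  datetime(2023, 5, 2, 13)),  # 4 hours
        (datetime(2023, 5, 3, 9),  datetime(2023, 5, 6, 9)),   # 72 hours: a straggler
    ]

    hours = [(review - opened).total_seconds() / 3600 for opened, review in prs]
    print(f"average: {mean(hours):.1f}h, median: {median(hours):.1f}h")
    # Average 27.0h vs median 5.0h: the average exposes the straggler.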

Coding Days

Why:

  • Understand how frequently your team is able to commit code. Use it as a proxy for how much coding time developers actually have available.

Goal:

  • An average of 3-4 coding days

    • Typical for product/feature-based dev teams.

    • Can be lower for teams that include architects, data scientists, QA, or DevOps.

Starting Metric:

  • Coding Days

Settings to Apply:

  • None Needed
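
A coding day is typically a calendar day on which a developer commits at least one change. The sketch below assumes a per-ISO-week count over hypothetical commit data, which may differ from Allstacks’ exact definition:

    from collections import defaultdict
    from datetime import date

    # Hypothetical commits: (author, commit date).
    commits = [("amy", date(2023, 6, 5)), ("amy", date(2023, 6, 5)),
               ("amy", date(2023, 6, 6)), ("amy", date(2023, 6, 8)),
               ("bo",  date(2023, 6, 7))]

    days = defaultdict(set)
    for author, d in commits:
        year, week, _ = d.isocalendar()
        days[(author, year, week)].add(d)   # distinct days with at least one commit

    for (author, year, week), committed_days in sorted(days.items()):
        print(f"{author} week {week}/{year}: {len(committed_days)} coding days")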
