
Roadmap Focus

Why:

  • Understand how much of your team’s effort is going towards roadmap work vs maintenance and tasks.

  • Answer the question “How long do you think this will take?” with confidence, knowing how focused your team has historically been on planned work.

Metric:

  • Velocity, by issue count or story points.

Settings to Apply:

  • Group series by parent properties to represent your team’s roadmap parent type.

  • Filter out “Won’t Do” statuses via Advanced Filters.

  • Utilize a pie chart and look at all work completed over the past 60 or 90 days.
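
The math behind this pie chart can be sketched as below — a minimal illustration, assuming hypothetical issue fields `completed_at`, `parent_type`, and `status` (real field names depend on your tracker):

```python
from collections import Counter
from datetime import datetime, timedelta

def roadmap_focus(issues, days=90, now=None):
    """Share of recently completed work by parent type (e.g. roadmap vs maintenance).

    `issues` is a list of dicts with hypothetical keys `completed_at`
    (datetime), `parent_type` (str), and `status` (str).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    counts = Counter(
        i["parent_type"]
        for i in issues
        if i["completed_at"] >= cutoff and i.get("status") != "Won't Do"
    )
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}
```

Each slice of the pie is one parent type's fraction of all work completed in the window, with “Won’t Do” items excluded.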

Velocity

Why:

  • Understand if your team is consistently delivering based on their estimation and pointing.

Metric:

  • Velocity

    • Save one view by issue count (so unestimated items aren’t hidden) and another by story points.

Settings to Apply:

  • Use a default view utilizing story points or issue count based on how your team estimates work.

  • Filter out “Won’t Do” type statuses using advanced filters.

  • Group by month to avoid the inconsistencies of weeks.

Sprint Attainment (for Scrum Teams)

Why:

  • Understand whether your team is committing to a realistic amount of planned work.

  • Determine if interruptions are coming into your sprints after the start.

Metric:

  • Committed vs Completed

Settings to Apply:

  • Using the “Descendants of” selector, choose the individual sprints that your team wants to track.

Goals to Consider:

  • Did we complete all of the issues committed to at the beginning of the sprint and meet our goal?

  • Threshold goals vary, but a standard target is completing 80-90% of committed work during the sprint.
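
The committed-vs-completed math can be sketched as below — `committed_ids` and `completed_ids` are hypothetical sets of issue keys; the “unplanned” remainder is one way to spot interruptions that entered after sprint start:

```python
def sprint_attainment(committed_ids, completed_ids):
    """Percent of issues committed at sprint start that were completed,
    plus completed work that was pulled in after the sprint started."""
    committed, completed = set(committed_ids), set(completed_ids)
    attainment = (
        100.0 * len(committed & completed) / len(committed) if committed else 0.0
    )
    unplanned = sorted(completed - committed)  # interruptions / scope added later
    return attainment, unplanned
```

A result of 80.0 with a long `unplanned` list suggests the team delivered, but not on what it planned.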

Cycle Time

Why:

  • Understand if your team is consistently sizing and estimating work.

Metric:

  • Cycle Time

Settings to Apply:

  • Series grouping to consider:

    • Story Points: Understand if the team is consistently pointing work of relative technical complexity and time.

    • Issue Type: Confirm the team has consistent definitions of work by issue type.

      • E.g., the smallest issue type (sub-task or task) should always be completable within 2-3 days.

  • Aggregate by median to remove statistical outliers.

  • Filter out “Won’t Do” type states.
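
The median aggregation per series group can be sketched as below — an illustration only, assuming hypothetical issue fields `cycle_days`, `status`, and the chosen group key:

```python
from collections import defaultdict
from statistics import median

def median_cycle_time(issues, group_key="points"):
    """Median cycle time in days per series group (e.g. story points or issue type).

    `issues`: dicts with hypothetical keys `cycle_days` (float), `status`,
    and the group key. Using the median keeps one stuck issue from
    dominating the whole series.
    """
    groups = defaultdict(list)
    for i in issues:
        if i.get("status") == "Won't Do":
            continue
        groups[i[group_key]].append(i["cycle_days"])
    return {k: median(v) for k, v in groups.items()}
```

If the median for 1-point work approaches the median for 3-point work, pointing is probably inconsistent.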

Lead Time

Why:

  • Understand how quickly your team responds to high priority bugs, defects, and customer inquiries.

Metric:

  • Issue Lead Time

Settings to Apply:

  • Filter out “Won’t Do” type issue/work item states.

  • Filter down to bugs, defects and/or customer requests.

  • Group by month or quarter.

  • Group series by priority/severity.

  • Aggregate by median to remove statistical outliers.

Issue Time in State

Why:

  • Analyze for bottlenecks

    • Finding median or average time in progress, wait states, etc.

      • For example, in progress states should often be under 24 hours. Make sure items aren’t bottlenecked waiting for QA resources, etc.

Metric:

  • Issue Time in State

Settings to Apply:

  • Aggregate by median

  • Group monthly

Defect/Bug creation over time - filter by backlog

Why:

Metric:

  • Issue Creation

Settings to Apply:

  • Group series by priority/severity.

  • Filter to the relevant backlog of the team.

Bouncebacks

Why:

  • An increase in bouncebacks shows churn/uncertainty in the process.

Metric:

  • Bouncebacks, which displays cards that have revisited previous states, like QA back to dev, or dev back to product requirements.

Settings to Apply:

  • Sort by number of bouncebacks on cards; the cards with the most bouncebacks make good retro items.
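
Counting a card's bouncebacks can be sketched as below — a minimal illustration, where `workflow_order` is a hypothetical list of your states from earliest to latest and any move to an earlier state counts:

```python
def count_bouncebacks(status_history, workflow_order):
    """Count backward transitions in one card's status history.

    `status_history`: the states a card entered, in order.
    `workflow_order`: workflow states from earliest to latest.
    """
    rank = {state: n for n, state in enumerate(workflow_order)}
    return sum(
        1
        for a, b in zip(status_history, status_history[1:])
        if rank[b] < rank[a]
    )
```

Sorting cards by this count surfaces the churniest items for retro discussion.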

Coding days

Why:

  • Low coding days implies blockers preventing the team from getting heads-down time to write code.

Metric:

  • Coding Days

Goals to Consider:

  • Where does the recommended 3-4 days come from?

    • Developers should check code in daily, and most scrum teams have a day of meetings/refinements/planning, so it will almost never be 5.

  • What kind of teams reach 3-4 days?

    • Typical product/feature-based dev teams.

    • It can be less for teams that include architects, data scientists, QA, or DevOps.

Pull request response time

Why:

  • Old pull requests get stale; slow responses stall work sitting in review.

Metric:

  • Pull Request Response Time, which tracks time to first response on PRs, as well as approvals and time to merge.

Settings to Apply:

  • Use averages instead of medians (PR response times skew overly positive on median).

Goals to Consider:

  • Ideally between 1-3 days.

Refined backlog ready to work

Why:

Metric:

Settings to Apply:

  • Try a burndown chart with filters that match the team’s backlog.

    • Pick an issue state that represents refined, ready-to-work items.

    • Measure by story points.

Open defects by priority

Why:

  • Understand how quality is impacted by your focus on efficiency and speed.

Metric:

  • Issue Creation

Settings to Apply:

  • Measure time from “First Activity to Present”.

  • Filter out all resolved or done states.

  • Filter to just bugs/defects.

  • Filter to your team’s relevant backlog/ownership.

  • Apply the “Include Disabled Team Members” flag.

  • Group series by priority or severity.
