Roadmap Focus

Velocity

  • Base goal of the metric: consistency of delivery and estimation. Do we consistently deliver 20 points, and does 20 points always represent roughly the same amount of work?

  • Based on issue count and type: this gives a more holistic view of everything the team has done historically.

  • Other cases:

    • Normalizing across teams

      • Ratios normalize disparate estimation processes.

    • Different team types (QA vs. dev vs. product)

      • Dev - assignee-based velocity

      • QA - if they are assigned, historic assignee velocity

      • Product - more focused on roadmap focus, capacity, and historic allocation of the dev team.

  • By points, by estimation method

  • Determine consistency of child delivery and ratios

    • Defects vs new product

    • Roadmap focus, etc
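A minimal sketch of the consistency check described above, using hypothetical per-sprint point totals (the data and variable names are assumptions, not pulled from any real tool):

```python
from statistics import mean, stdev

# Hypothetical completed points per sprint (assumed data).
sprint_points = [18, 22, 20, 19, 21]

avg_velocity = mean(sprint_points)        # central delivery rate
cv = stdev(sprint_points) / avg_velocity  # coefficient of variation:
                                          # lower = more consistent delivery/estimation

print(f"avg velocity: {avg_velocity:.1f} points, consistency (CV): {cv:.2f}")
```

A low coefficient of variation suggests "20 points" means roughly the same thing sprint to sprint; the same ratio also normalizes comparisons across teams with disparate estimation processes.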

Sprint Attainment (If scrum)

  • Base goal: Did we complete all of the issues committed to at the beginning of the sprint and meet our goal?

    • Threshold goal is between 80% and 90%

    • Committed vs Committed Completed

      • Filter by project or individual sprint names. 

    • Added vs Added Completed

    • Percentage changes over time
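The committed-vs-completed calculation above, sketched with a hypothetical sprint snapshot (field names and numbers are assumptions):

```python
# Hypothetical end-of-sprint snapshot (assumed data, not a real Jira export).
committed = 25            # points committed at sprint start
committed_completed = 21  # of those, points completed by sprint end
added = 6                 # points added mid-sprint
added_completed = 4

attainment = committed_completed / committed
in_target = 0.80 <= attainment <= 0.90  # threshold goal from above

print(f"attainment: {attainment:.0%}, within 80-90% target: {in_target}")
```

Tracking this percentage per sprint (filtered by project or sprint name) shows whether commitment discipline is improving over time.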

Cycle Time

  • Goal: Consistent timely completion of relatively sized work.

    • By issue type 

    • Or by story points

    • “Finding your five”

    • Consistent definition of issue types

    • The smallest issue type should take under 3 days.

    • What about weekends/non-working times?

      • Set thresholds slightly higher than the goal to call out truly stalled items. 

        • Set goals based on calendar time, not working time. 
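A sketch of cycle time by issue type in calendar days (per the note above), with an assumed stall threshold set slightly above the goal; the dates and issue types are hypothetical:

```python
from datetime import date
from statistics import median

# Hypothetical (started, done) dates per issue, grouped by type (assumed data).
issues = {
    "Story": [(date(2023, 5, 1), date(2023, 5, 5)), (date(2023, 5, 8), date(2023, 5, 12))],
    "Task":  [(date(2023, 5, 2), date(2023, 5, 3)), (date(2023, 5, 9), date(2023, 5, 15))],
}

STALL_THRESHOLD_DAYS = 5  # set slightly above the goal to flag truly stalled items

results = {}
for issue_type, spans in issues.items():
    cycle_days = [(done - start).days for start, done in spans]  # calendar time
    stalled = sum(1 for d in cycle_days if d > STALL_THRESHOLD_DAYS)
    results[issue_type] = (median(cycle_days), stalled)
```

Calendar days keep the math simple; the slightly raised threshold absorbs weekends so only genuinely stalled items get flagged.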

Issue Time in State

  • Analyze for bottlenecks

    • Finding median or average time in progress, wait states, etc.

      • For example, in progress states should often be under 24 hours. Make sure items aren’t bottlenecked waiting for QA resources, etc.
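The bottleneck analysis above can be sketched as a median per state, flagging states over the 24-hour in-progress guideline; the state names and hour values are assumed:

```python
from statistics import median

# Hypothetical hours each card spent in each workflow state (assumed data).
time_in_state_hours = {
    "In Progress":   [10, 30, 20, 18],
    "Waiting on QA": [50, 72, 60, 44],
}

# Median per state; states well over 24h are bottleneck candidates.
medians = {state: median(hours) for state, hours in time_in_state_hours.items()}

for state, med in medians.items():
    flag = " <-- bottleneck?" if med > 24 else ""
    print(f"{state}: median {med}h{flag}")
```

Medians resist the odd outlier card; a high median in a wait state (like waiting on QA) points at a resourcing bottleneck rather than a one-off.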

Lead Time

  • Tracks from creation to completion

    • For defects and customer requests

      • Group by priority / SLA commitments
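A sketch of creation-to-completion lead time checked against per-priority SLA commitments; the priorities, timestamps, and SLA hours are all assumptions:

```python
from datetime import datetime

# Hypothetical defects: (priority, created, resolved) - assumed data.
defects = [
    ("P1", datetime(2023, 6, 1, 9), datetime(2023, 6, 1, 17)),
    ("P2", datetime(2023, 6, 2, 9), datetime(2023, 6, 7, 9)),
]
sla_hours = {"P1": 24, "P2": 96}  # assumed SLA commitments per priority

breaches = {}
for priority, created, resolved in defects:
    lead_h = (resolved - created).total_seconds() / 3600  # creation to completion
    breaches[priority] = lead_h > sla_hours[priority]
```

Grouping by priority keeps a pile of low-priority requests from masking SLA breaches on critical defects.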

Defect/Bug creation over time - filter by backlog

  • Issue creation metric

    • By priority/severity

    • Filter to the relevant backlog of the team. 
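The creation-over-time count above reduces to a tally per (week, priority), assuming the input is already filtered to the team's backlog; the weeks and priorities shown are hypothetical:

```python
from collections import Counter

# Hypothetical newly created defects (iso week, priority), pre-filtered
# to the team's relevant backlog (assumed data).
created = [
    ("2023-W22", "High"), ("2023-W22", "Low"),
    ("2023-W23", "High"), ("2023-W23", "High"),
]

# (week, priority) -> count of new defects; a rising High count is the signal.
per_week = Counter(created)
```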

Bouncebacks

  • Displays cards that have revisited previous states, like QA back to dev, or dev back to product requirements. 

    • An increase indicates churn/uncertainty

      • Also sort by number of bouncebacks per card; high-bounceback cards make good retro items.
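A sketch of bounceback detection: with an assumed workflow order, any transition to an earlier state (QA back to dev, dev back to product requirements) counts as a bounceback. The workflow states and card histories are hypothetical:

```python
# Assumed workflow order; moving to an earlier state is a bounceback.
WORKFLOW = ["Requirements", "Dev", "QA", "Done"]
RANK = {state: i for i, state in enumerate(WORKFLOW)}

def bouncebacks(history):
    """Count transitions that revisit a previous state (e.g. QA back to Dev)."""
    return sum(
        1 for prev, cur in zip(history, history[1:])
        if RANK[cur] < RANK[prev]
    )

# Hypothetical card state histories; sort by churn for retro discussion.
cards = {
    "CARD-1": ["Dev", "QA", "Dev", "QA", "Done"],
    "CARD-2": ["Dev", "QA", "Done"],
}
by_churn = sorted(cards, key=lambda c: bouncebacks(cards[c]), reverse=True)
```

Sorting cards by bounceback count surfaces the highest-churn items first, which is exactly the retro shortlist described above.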

Coding days

  • Low coding days imply blockers that prevent the team from getting heads-down time to write code

    • Where does the recommended 3-4 days come from?

      • Developers should check code in daily. Most scrum teams have a day of meetings/refinements/plannings/etc, so it will almost never be 5

    • What kind of teams get 3-4 days?

      • Typical product/feature based dev teams.

      • Can be less for teams with architects, data scientists, QA, devops
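Coding days reduce to counting distinct days with at least one check-in; a sketch for one developer's week, with assumed commit dates:

```python
from datetime import date

# Hypothetical commit dates for one developer in one week (assumed data).
commits = [
    date(2023, 6, 5), date(2023, 6, 5),  # Monday: two commits, one coding day
    date(2023, 6, 6),                    # Tuesday
    date(2023, 6, 8),                    # Thursday
    date(2023, 6, 9),                    # Friday
]

coding_days = len(set(commits))        # distinct days with at least one commit
meets_target = 3 <= coding_days <= 4   # typical feature-team range from above
```

Deduplicating by date is what makes this "coding days" rather than commit volume; the meeting-heavy day in the middle of the week is why 5 almost never happens.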

Pull request response time

  • Tracks time to response on PRs, as well as approvals and time to merge

    • Use averages instead of medians (the median skews overly positive for PRs)

    • Old pull requests get stale

    • Ideally between 1-3 days
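A sketch of the average-not-median response time, with stale-PR flagging against the 1-3 day target; the PR timestamps are hypothetical:

```python
from datetime import datetime

# Hypothetical PRs: (opened, first review response) - assumed data.
prs = [
    (datetime(2023, 6, 1, 9), datetime(2023, 6, 2, 9)),   # 24h
    (datetime(2023, 6, 1, 9), datetime(2023, 6, 4, 9)),   # 72h
    (datetime(2023, 6, 5, 9), datetime(2023, 6, 5, 21)),  # 12h
]

hours = [(resp - opened).total_seconds() / 3600 for opened, resp in prs]
avg_days = (sum(hours) / len(hours)) / 24  # average, not median, per the note above
stale = sum(1 for h in hours if h / 24 > 3)  # beyond the 1-3 day target
```

The average is dragged upward by the occasional long-ignored PR, which is the point: the median would hide exactly the stale reviews this metric exists to catch.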

Refined backlog ready to work

  • Try a burndown with filters matched to the team's backlog

    • Pick an issue state that represents refined work.

    • By story points

Open defects by priority

  • Try issue creation metric

    • Filter to team’s relevant backlog 

    • By defects/bugs

    • Filter out all resolved states
