Metrics Glossary

Code

These metrics help you understand how the codebase has changed over time and where those changes are coming from.

Branches: Active Branches = Active Development Efforts 

  1. Description: The quantity of active branches in your codebase can fluctuate depending on the workload. However, it is important to merge branches in a timely manner before they go stale; letting them linger can result in additional work and merge complications.

  2. How is it calculated?: Active Branches are calculated by looking at all branches across connected services and logging the number of days each branch has been open. What counts as active depends on the version control system (VCS) and the team's criteria: typically you list all branches in the repository, define criteria for activity (for example, a commit within the last N days), evaluate each branch's history, and count the branches that meet those criteria. The specific method can be customized to suit the team's needs using VCS commands, GUI clients, or third-party tools; a minimal sketch appears after this list.

  3. What questions can I answer from this data?

    1. What development branches is my team working on?

    2. How many development branches is my team working on?

    3. What branches should I touch base with my team on because they've been open for longer than x days?

  4. What should I take away from this data?

    1. Typically, you should have more users than open branches. Follow up with users who have more than one active open branch.
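
As a rough illustration of the calculation above, here is a minimal sketch that lists remote branches and the days since their last commit using plain git. The 30-day activity threshold and the refs/remotes scope are assumptions for the example, not Allstacks' exact implementation.

```python
# Sketch: approximate "active branches" by days since each branch's last commit.
import subprocess
from datetime import datetime, timezone

ACTIVE_WITHIN_DAYS = 30  # assumed team-specific threshold

def branch_ages(repo_path="."):
    """Return (branch, days_since_last_commit) pairs for remote branches."""
    out = subprocess.run(
        ["git", "for-each-ref", "refs/remotes",
         "--format=%(refname:short) %(committerdate:iso-strict)"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    now = datetime.now(timezone.utc)
    ages = []
    for line in out.splitlines():
        name, _, iso_date = line.partition(" ")
        last_commit = datetime.fromisoformat(iso_date)
        ages.append((name, (now - last_commit).days))
    return ages

if __name__ == "__main__":
    for branch, days in branch_ages():
        status = "active" if days <= ACTIVE_WITHIN_DAYS else "stale"
        print(f"{branch}: last commit {days} days ago ({status})")
```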

Code Categorization: Code categorization can track development effort, help you understand team focus, normalize productivity, and keep the team(s) aligned. 

  1. Description: Our algorithm categorizes a developer's code to provide insights into team focus. While excessive code churn is typically undesirable, it can have various causes. It serves as an indicator rather than a metric to manage by, highlighting issues like evolving requirements or learning new libraries. These indicators aid in problem identification.

  2. How is it calculated?: Code type consists of six categories that help indicate progress within the codebase (a simplified sketch of these rules appears after this list):

    1. New Work: Features that have been added or new functionality.  This is calculated by looking at net new or net removed lines of code that do not fall into the other buckets.

    2. Legacy Refactor: paying down technical debt. This is calculated by analyzing code replacements that happen on code that has not been worked on in over 60 days.

    3. Help Others: developer/team member working on a feature or product that may not have been theirs.  This is calculated as new work that happens in a part of the code base that a user typically doesn’t work in. (Ex: If the user has zero commits in a file, and the file is active, the work is marked as Help Others.)

    4. Churn: Work that is going into the same spot, usually to fix bugs or rework problem code.  This is calculated as code that has been rewritten across 2 or more commits in the past 30 days.

    5. Library: Work that appears to be imported from other sources. This is calculated by identifying large added files of more than 100 lines with no single-line modifications, and excluding them from the other categories.

    6. Comments: the amount of in-code documentation.  This is calculated by looking for specific markers in popular file types, and identifying block/multi-line and whole line comments. (Lines with trailing line comments are treated as code)

  3. What questions can I answer from this data?

    1. What type of code is my team writing?

    2. How is the type of code my team is writing changing over time?

    3. What type of code is my team writing? (grouped view)

    4. Does the distribution of work map to the initiatives in place?

  4. What should I take away from this data?

    1. If you are deploying new features but see a lot of “Legacy Refactor” then a conversation with the team about what they are working on is warranted.

    2. A high amount of code churn can imply a developer who is spinning their wheels or struggling; it can also imply future technical debt and code quality challenges.
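
The sketch below restates the categorization rules described above as a simple decision function. It is an illustration of the thresholds listed (60 days, 30 days, 100 lines, zero prior commits by the author), not Allstacks' actual algorithm; the ChangedLine fields are hypothetical inputs you would derive from commit history.

```python
# Illustrative categorization of a single changed line based on the rules above.
from dataclasses import dataclass

@dataclass
class ChangedLine:
    is_comment: bool            # whole-line or block comment
    in_large_new_file: bool     # added file > 100 lines, no single-line edits
    rewrites_recent_code: bool  # replaces code rewritten within the last 30 days
    replaces_old_code: bool     # replaces code untouched for more than 60 days
    author_commits_in_file: int # prior commits by this author in the file

def categorize(line: ChangedLine) -> str:
    if line.is_comment:
        return "Comments"
    if line.in_large_new_file:
        return "Library"
    if line.rewrites_recent_code:
        return "Churn"
    if line.replaces_old_code:
        return "Legacy Refactor"
    if line.author_commits_in_file == 0:
        return "Help Others"
    return "New Work"

print(categorize(ChangedLine(False, False, True, False, 5)))  # Churn
```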

 

Code Changes: Codebase contributions and removals can provide insights into positive refactoring and contributor seniority.

  1. Description: Code creation and contribution for boilerplate and critical features tend to spike first. Refactoring becomes more important as the project goes along, which either results in the removal of more code or has no net effect. It is common to see senior engineers driving refactoring; however, if junior engineers exhibit this behavior, it may indicate advancement and increased seniority.

  2. How is it calculated?: This data is calculated by looking at the net lines created minus the net lines removed. If all lines touched in a day are removals, you will see a -1 for that day; if all lines touched are additions, you will see a +1. If an equal number of lines were added and removed, the value is 0. Note that days with no code activity also appear as 0. (A minimal sketch appears after this list.)

  3. What questions can I answer from this data?

    1. How does the net lines calculation help us assess productivity and efficiency?

    2. Can we identify any correlations between code activity and product milestones? (Ex. Feature releases, bug fixes, etc)

    3. Are developers adding more or removing more from the code base? 

    4. Does the pattern match the work that was expected?

  4. What should I take away from this data?

    1. A code base increase should be anticipated if the team is creating new functionality. The expansion should be minimal if the team is merely maintaining. You should typically observe a contraction in the codebase if the team is reworking.

    2. If you see significant additions or removals, follow up with the team on the initiatives, events, or changes that caused such a dramatic shift and ensure they match the expected work patterns.
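
A minimal sketch of the daily value described above, assuming the -1/0/+1 behavior comes from normalizing the difference by the total lines touched; the exact formula may differ.

```python
# Sketch: -1 when all touched lines are removals, +1 when all are additions,
# 0 when they balance or when there is no activity that day.
def daily_code_change(lines_added: int, lines_removed: int) -> float:
    touched = lines_added + lines_removed
    if touched == 0:
        return 0.0  # days with no code activity appear as 0
    return (lines_added - lines_removed) / touched

print(daily_code_change(120, 0))  # 1.0  -> all additions
print(daily_code_change(0, 45))   # -1.0 -> all removals
print(daily_code_change(30, 30))  # 0.0  -> balanced
```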

Code Duplication: Reveals Code Reusability 

  1. Description: Copy-paste duplication in a codebase hinders maintainability and readability. Minimizing this duplication indicates high-quality code, as it reduces the effort of updating repetitive sections and enhances code comprehension.

  2. How is it calculated?: The metric is derived from code similarity within commits. A score of 100 indicates multiple copies of each line, while a score of 0 means all added lines are unique. This metric focuses solely on added lines and does not consider removed lines. (A hedged sketch appears after this list.)

  3. What questions can I answer from this data?

    1. Are developers copying and pasting a large amount of code?

    2. Can we identify patterns or trends in the code similarity metric over time or across different commits?

    3. How can we interpret the scores between 0-100 in terms of code similarity within commits?

  4. What should I take away from this data?

    1. Highly duplicative codebases can be difficult to maintain and are often the source of escaped defects down the line. When significant code duplication occurs it is important to ascertain why and identify methods of reducing duplication.

    2. When new duplication is added to your codebase, a key action is to talk with the developers creating it about strategies for avoiding duplicative code.
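
A hedged sketch of one way to compute a 0-100 duplication score over added lines. It treats exact duplicate lines as copies, which is a simplification of the code-similarity analysis described above.

```python
# Sketch: fraction of added lines that have at least one exact copy, scaled to 0-100.
from collections import Counter

def duplication_score(added_lines: list[str]) -> float:
    lines = [l.strip() for l in added_lines if l.strip()]
    if not lines:
        return 0.0
    counts = Counter(lines)
    duplicated = sum(n for n in counts.values() if n > 1)
    return 100.0 * duplicated / len(lines)

print(duplication_score(["x = 1", "y = 2", "x = 1"]))  # ~66.7
print(duplication_score(["a", "b", "c"]))              # 0.0 -> all unique
```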

 

Commits: Commits is a volume metric which helps you understand your team's work output. While this doesn't capture everything your team does, it is a good way to identify employees who may be struggling.

  1. Description: Commits measures the quantity and frequency of code commits within a specific timeframe, reflecting the development team's activity and engagement. It offers insights into development pace, collaboration levels, and overall project progress. Monitoring commit volume aids in tracking productivity, identifying activity fluctuations, and assessing project velocity.

  2. How is it calculated?: Commits are measured by counting the number of commits made within a specific timeframe. The measurement involves retrieving commit logs and tallying the individual commits during a defined period of time (typically customized by team/project). A minimal sketch appears after this list.

  3. What questions can I answer from this data?

    1. How is my team's commit volume changing over time?

    2. Is my team creating larger or smaller commits?

    3. How often does my team commit code?

  4. What should I take away from this data?

    1. An irregular commit volume can indicate poor developer hygiene. Best practices are to make frequent, granular commits, leading to maintainable iterative units of work.

    2. Infrequent commits are another example of poor development practice. Daily commits are typically the best-practice standard; they encourage iterative units of work and make it easier to stay in sync with the main codebase.

    3. Large commits create challenging downstream code review problems and can be hard to integrate into the codebase. Best practices are to generate smaller, frequent commits.
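
A minimal sketch of counting commits in a timeframe with plain git. The date window and revision are placeholders; Allstacks aggregates across all connected services rather than a single local repository.

```python
# Sketch: count commits reachable from a revision within a date window.
import subprocess

def commit_count(since: str, until: str, rev: str = "HEAD", repo_path: str = ".") -> int:
    out = subprocess.run(
        ["git", "rev-list", "--count", f"--since={since}", f"--until={until}", rev],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    return int(out.strip())

print(commit_count("2024-01-01", "2024-01-31"))  # commits on HEAD in January 2024
```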

Commits by File Type: A codebase can consist of many different technologies or programming languages. Keeping track of the balance can be hard as your product evolves over time. Looking at the new code added by file type can help you understand what skills are needed moving forward for your team.

  1. Description: Analyzing the distribution of file types in a codebase offers insights into code purpose, resource allocation, and team organization. It helps identify growth areas and informs decisions on team restructuring to maintain focus.

  2. How is it calculated?: Allstacks generates this data by looking at all repositories connected to this project and scanning the names of each file in each commit to identify the file types. The value we use for this metric is the total number of lines changed, defined as lines_added + lines_removed for each type of file. (A hedged sketch appears after this list.)

  3. What questions can I answer from this data?

    1. Which file types consistently have the highest or lowest commit volume?

    2. Are there any correlations between commit volume by file type and milestones, development phases, or team activities?

  4. What should I take away from this data?

    1. This data can help you understand if the concentration of work is shifting into new technologies or if certain technologies take more work to maintain than others. It can help you make decisions on how to staff, when to refactor, and how to make investments moving forward.
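
A hedged sketch of the lines-changed-by-file-type calculation using `git log --numstat` on a local repository; the file extension stands in for the file type, and rename notation is not specially handled.

```python
# Sketch: sum lines_added + lines_removed per file extension over a date range.
import subprocess
from collections import defaultdict
from pathlib import PurePosixPath

def lines_changed_by_type(since: str, repo_path: str = ".") -> dict[str, int]:
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    totals: dict[str, int] = defaultdict(int)
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":
            continue  # skip blank separators and binary files
        added, removed, path = parts
        ext = PurePosixPath(path).suffix or "(no extension)"
        totals[ext] += int(added) + int(removed)
    return dict(totals)

print(lines_changed_by_type("90 days ago"))
```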

Commit by Pull Request: Some features are very large and touch a lot of code. Those tend to be the riskiest, since there is such a large volume of changes. Keeping track of the scale of feature pull requests can help plan for risks.

  1. Description: This metric is a good way to identify if you have a very large pull request that isn't getting merged. Some pull requests can linger open due to blockers and may end up being forgotten about in a process. This is a great way to keep tabs on those pull requests to prevent code rot, and eventual duplication of work.

  2. How is it calculated?: This data is generated by looking at all pull requests that are not yet closed or merged and counting the number of changed lines in the commits that are part of those pull requests. (A hedged sketch appears after this list.)

  3. What questions can I answer from this data?

    1. What is the average number of commits per pull request?

    2. Are there correlations between commit volume by PR and other factors (e.g., size of the development team, complexity of the project, etc.)?

    3. What trends, if any, are being seen over specific periods of time for commit volume by PR?

  4. What should I take away from this data?

    1. Size and complexity: higher commit volume within a PR indicates a larger and more complex set of changes. 

    2. Review and integration effort: PRs with high commit volumes may require more thorough review and testing to ensure code quality.
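
A hedged sketch of gathering changed lines and commit counts for open pull requests, assuming GitHub is the connected service: the REST list endpoint returns open PRs, and the per-PR detail endpoint exposes additions, deletions, and commit counts. OWNER, REPO, and TOKEN are placeholders, and pagination beyond 100 PRs is not handled.

```python
# Sketch: changed lines and commit counts for open PRs via the GitHub REST API.
import requests

OWNER, REPO, TOKEN = "my-org", "my-repo", "ghp_..."  # placeholders
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}
API = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"

def open_pr_sizes():
    prs = requests.get(API, params={"state": "open", "per_page": 100}, headers=HEADERS).json()
    for pr in prs:
        detail = requests.get(f"{API}/{pr['number']}", headers=HEADERS).json()
        changed = detail["additions"] + detail["deletions"]
        print(f"PR #{pr['number']}: {detail['commits']} commits, {changed} changed lines")

open_pr_sizes()
```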

 

Code Review: 

An indicator of code quality is whether new code and features are peer-reviewed. These metrics can be used to spot risky code and processes.

Merged Pull Requests: Understanding who is merging Pull Requests will help you understand the distribution of work. 

  1. Description: Merged pull requests refer to code changes that have been successfully incorporated into the main codebase. When a pull request is merged, it means that the proposed code modifications have been reviewed, approved, and merged into the main branch, making them a permanent part of the project's codebase.

  2. How is it calculated? Merged PRs are typically calculated by counting the number of PRs that have been successfully merged into the main branch. 

  3. What questions can be answered from this data?

    1. How many PRs have been successfully merged over a specific period of time? (EX: Sprints, Monthly, Quarterly, etc)

    2. What is the rate of merged PRs over time?

    3. Which contributors or developers have the most merged PRs?

  4. What should I take away from this data?

    1. Development velocity: Tracking the rate of merged pull requests over time provides insights into the speed at which code changes are being incorporated into the codebase. Consistently high merge rates can indicate a productive and efficient development workflow.

    2. Individual contributions: By examining the contributors with the most merged pull requests, you can identify team members who are actively involved and making significant contributions to the codebase. This can help recognize and acknowledge individual efforts and expertise.

Pull Request Approvals: Permission to merge code changes after review

  1. Description: Pull request approvals involve the review and approval of proposed code changes by designated reviewers, ensuring that the changes meet required standards and align with project goals before being merged into the main codebase. These approvals are essential for maintaining code quality, fostering collaboration, and ensuring a reliable codebase.

  2. How is it calculated? PR approvals are typically calculated by counting the number of approved PRs within a specific timeframe. 

  3. What questions can I answer from this data?

    1. Are the code changes meeting the required standards and aligning with project and development goals?

    2. Are there any patterns and trends in the approval rates over time, in different repos, and or projects?

    3. Are there/have there been any bottlenecks or delays in the approval process?

  4. What should I take away from this data?

    1. The percentage of PRs that have been approved typically indicates the efficiency and effectiveness of the approval process. Evaluating the number of approvals also helps ensure that code quality adheres to the standards put in place by the team's collaborative efforts.

Pull Request Comments: One indicator of code quality is whether new code and features are peer-reviewed. Having other engineers review the code and add comments helps increase confidence in the codebase and ensures the code is easily understood by others. Looking at comments on Pull Requests can help visualize this.

  1. Description: Pull request comments are a form of communication and feedback exchanged during the code review process. Developers and reviewers can leave comments on specific lines or sections of code within a pull request to discuss improvements, ask questions, provide suggestions, or address concerns. These comments facilitate collaboration, help clarify code changes, and contribute to the overall quality and effectiveness of the review process.

  2. How is it calculated? PR comments are calculated by counting the number of individual comments left by both reviewers and developers. Each comment is considered its own entity. 

  3. What questions can I answer from this data?

    1. How active is the code review process?

    2. Are there unresolved discussions or issues?

    3. Are there any recurring patterns or common feedback?

    4. How effectively is feedback incorporated?

  4. What should I take away from this data?

  1. Communication and documentation: Pull request comments serve as a record of communication and discussions related to the code changes. This data can be useful for future reference, documenting decisions made, and maintaining a historical record of the development process.

  2. Feedback and suggestions: Pull request comments provide valuable feedback and suggestions from code reviewers and team members. This data helps identify areas for improvement, potential bugs, or alternative solutions to the proposed code changes.

Pull Request Creations: Initiating changes for review and integration into the codebase

  1. Description: PR creations are the process of submitting proposed changes or additions to a codebase for review and integration. Typical PR creation processes include detailed titles and descriptions, specifying source and target branches, and the ability to attach relevant documentation or reference materials. 

  2. How is it calculated?: While PR creations don’t typically have a numerical value, they are tracked and calculated as events. The easiest way to accomplish this is to identify the start of the period/project/sprint and count the number of PRs created.

  3. What questions can be answered from this data?

    1. How many PRs were created during this specific time period?

    2. Does the number of PRs vary across different teams or projects?

    3. Are there any types of patterns or trends in PR creation?

    4. What is the average frequency of PR creations?

  4. What should I take away from this data?

    1. Understand the impact of changes or initiatives: Analyzing pull request creations before and after implementing specific changes or initiatives can help assess their effectiveness. For instance, if the number of pull requests increases after implementing a new development methodology or tool, it indicates that the change has had a positive impact.

    2. Identify outliers and anomalies: Analyzing pull request creations can help identify outliers or anomalies in the development process. For example, a sudden surge in pull requests may indicate an urgent need or a critical issue that requires attention.

    3. Identify areas for process improvement: By examining pull request creations, you can identify areas where the development process can be optimized. For example, if the volume of pull requests is consistently low, it may indicate a need for encouraging more frequent code submissions or streamlining the code review process.

 

Pull Request Cycle Time: Evaluate your engineering teams' productivity by tracking the amount of time from when work starts development to the time it's deployed to production.

  1. Description: PR cycle time refers to the duration it takes for a PR to go through the entire review and approval process until it is merged. 

  2. How is it calculated? PR cycle time is measured by the time elapsed between the creation of the PR and its final merging into the codebase. Typically, there are timestamps associated with various events throughout the PR lifecycle (creation, review, comments, and approval). A minimal sketch appears after this list.

  3. What questions can be answered from this data?

    1. How long does it take for pull requests to be reviewed and approved?

    2. Are there any bottlenecks or delays in the pull request process?

    3. Is the pull request cycle time consistent across different team members or reviewers?

    4. Are there any trends or patterns in the pull request cycle time that can help identify areas for improvement in the development workflow?

  4. What should I take away from this data?

    1. Measure team productivity: PR cycle time serves as a metric to measure productivity and responsiveness of the development team.

    2. Optimize development workflow: This metric can be used to reveal patterns/trends during the PR lifecycle which enables teams to help to optimize their workflows. This helps identify areas where automation, process improvements, or additional resources may be needed. 
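
A minimal sketch of computing PR cycle time from creation and merge timestamps. The input shape (ISO 8601 created_at/merged_at pairs) is an assumption; any PR source that exposes those two timestamps works the same way.

```python
# Sketch: cycle time in hours per PR, summarized with a median.
from datetime import datetime
from statistics import median

def cycle_times_hours(prs: list[dict]) -> list[float]:
    times = []
    for pr in prs:
        created = datetime.fromisoformat(pr["created_at"])
        merged = datetime.fromisoformat(pr["merged_at"])
        times.append((merged - created).total_seconds() / 3600)
    return times

prs = [
    {"created_at": "2024-03-01T09:00:00+00:00", "merged_at": "2024-03-02T15:30:00+00:00"},
    {"created_at": "2024-03-03T10:00:00+00:00", "merged_at": "2024-03-03T18:00:00+00:00"},
]
hours = cycle_times_hours(prs)
print(f"median cycle time: {median(hours):.1f} hours")  # median of 30.5 h and 8.0 h
```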

 

Pull Request Response Time: The time taken to review and respond to PRs

  1. Description: PR Response time refers to the time it takes for reviewers/collaborators to provide feedback or to take action on a submitted PR within a given project/sprint/etc. It encompasses the time from when a PR is opened until it receives a response, such as comments, suggestions, approvals, or rejections. 

  2. How is it calculated?: While PR response time is not limited to a singular calculation, typically it’s calculated in three steps all based on timestamps. First, identify the start time for when the PR was created or opened. Second, identify the response time based on the timestamp associated with the first response (comment, suggestion, approval, or rejection). Lastly, calculate the elapsed time between the two by subtracting the start time from the first response time. 

  3. What questions can be answered from this data?

    1. Are there any patterns/trends in response time over time or across different reviewers/teams?

    2. Is there a correlation between response time and code quality?

    3. Are there any bottlenecks or delays in the review process due to response times?

    4. How long does it take for PRs to receive feedback?

  4. What should I take away from this data?

    1. Optimize workload distribution: Examining response times across different reviewers or teams can help identify variations in workload distribution. If certain reviewers have consistently longer response times, it may suggest an imbalance in workload or a need for additional resources. 

    2. Improve collaboration and assess process improvements: Analyzing response times can highlight the effectiveness of team/project communication and collaboration. Depending on the length of response times, they may indicate successful collaboration or shine a light on room for improvement (lack of timely feedback, the need to set clearer expectations, etc.).

    3. Identify and address bottlenecks: By examining the response time of PRs, you can identify barriers or delays during the review process. If certain PRs experience longer response times, it may indicate issues such as reviewer availability, resource constraints, or complex code changes that require more attention. 

    4. Foster culture of continuous improvement: By regularly reviewing and discussing response time metrics, the team has the ability to identify areas for optimization, experiment with different approaches, and track progress over time. 

Pull Request Size: The size of a Pull Request can be an indicator of quality and risk if too large. Use this metric to spot risky code and prevent an increase in bugs created.

  1. Description: Pull request size refers to the extent or magnitude of code changes within a pull request. It represents the amount of code that has been added, modified, or removed in the proposed changes. The size of a pull request can vary, ranging from small, incremental changes to larger, more substantial modifications that impact multiple files or functionalities.

  2. How is it calculated?: Pull request size is typically calculated by measuring the number of lines of code added, modified, or removed in the proposed changes.

  3. What questions can we answer from this data?

    1. How extensive are the code changes in each PR?

    2. Are there any trends in the size of PRs (e.g., really small vs. very large)?

    3. Have there been any noticeable trends in PR size over time?

  4. What should I take away from this data?

    1. Trends over time: Tracking the changes in pull request size over time can reveal patterns, such as increasing or decreasing sizes, which can provide insights into the evolution of the project, team dynamics, or development practices.

    2. Distribution of sizes: Examining the range of pull request sizes can reveal patterns, such as a majority of small or large pull requests, which can indicate different development practices or challenges.

Pull Requests: Code Review and Collaboration Process

  1. Description: Open pull requests represent new features or bug fixes waiting for review. Identifying aging or old pull requests can help you discover features that have been abandoned or have other issues introduced into production.

  2. How is it calculated?: Open PRs are calculated by counting the number of PRs that have been created but not yet closed or merged.

  3. What questions can I answer from this data?

    1. How many pending changes are currently awaiting review and merging?

    2. Are there any bottlenecks or delays in the review and merging process, and where?

    3. Can we identify areas or features that are currently under active development or modification?

  4. What should I take away from this data?

    1. Collaboration and code review process: Open pull requests highlight the ongoing collaboration and code review process within the team. Reviewing and merging pull requests is crucial for maintaining code quality and ensuring that changes are properly validated.

    2. Insight into the current workload: The number of open pull requests provides an indication of the pending changes awaiting review and merging, giving an understanding of the current workload on the development team.

    3. Transparency and visibility: Monitoring open pull requests promotes transparency and visibility within the team and across stakeholders. It allows everyone involved to have a clear overview of the ongoing development activities and the progress being made on different features or changes.

Builds

These build metrics are indicators of developer productivity and code quality based on the average time a build takes and the success rate by developer so you can easily spot risks and areas of improvement.

Build Success Rate: Managing features by branches leads to cleaner, well-maintained code. Looking at how users are following standards will identify risks in your code and areas for developers to improve.

  1. Description: Build success rate refers to the probability of successfully compiling and building software code without errors. Factors influencing this rate include code quality, accurate build configurations, effective version control, automated testing, and a reliable build infrastructure. Proper error handling, comprehensive logging, and real-time monitoring contribute to a higher success rate. Project-specific factors such as code complexity and dependencies also impact the build success rate. Evaluating and optimizing these factors are crucial for improving the reliability and efficiency of the code building process.

  2. How is it calculated? Build success rate is typically calculated by dividing the number of successful builds by the total number of attempted builds and then multiplying by 100 to get a percentage. A minimal sketch appears after this list.

  3. What questions can I answer from this data?

    1. How does the build success rate vary over time?

    2. Are there specific types of errors or failure points that frequently contribute to build failures?

    3. Can improvements in code quality, test coverage, or build configurations lead to a higher build success rate?

    4. What are some factors that would impact the success rate? (Ex: size of codebase, number of dependencies, etc.)

  4. What should I take away from this data?

    1. Pinpoint Common Failure Points: By analyzing the data, it becomes possible to identify the most common types of build failures or errors. This knowledge helps prioritize efforts to address these specific failure points, such as enhancing error handling, improving test coverage, or revisiting problematic code areas. Focusing on these areas can contribute to a higher build success rate.

    2. Evaluate the Impact of Process Changes: Data analysis enables the evaluation of the impact of process changes or improvements on build success rates. By measuring the success rate before and after implementing specific changes, it becomes possible to determine the effectiveness of those changes. This feedback loop facilitates iterative improvements and helps refine development practices over time.

    3. Compare Performance Across Teams or Projects: Analyzing build success rates allows for a comparison of performance across different teams, projects, or repositories. This comparison can highlight disparities and provide insights into best practices employed by high-performing teams or projects. Sharing these insights and promoting knowledge transfer can foster improvements across the organization.
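
A minimal sketch of the build success rate calculation described above.

```python
# Sketch: successful builds divided by attempted builds, as a percentage.
def build_success_rate(successful_builds: int, attempted_builds: int) -> float:
    if attempted_builds == 0:
        return 0.0  # no builds attempted; report 0 rather than dividing by zero
    return successful_builds / attempted_builds * 100

print(build_success_rate(47, 50))  # 94.0
```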

Build Time: The average amount of time that it takes a build to complete. Long build times can cause delays in developer activity and impact productivity.

  1. Description: The average duration it takes to build or compile a specific project. It measures the time required to transform the code into executable code or artifacts. 

  2. How is it calculated? Typically, average build time is calculated by measuring the duration of each build process, from start to finish, and then finding the average across the specified period. It reflects the time required to transform code into a build artifact, and is obtained by summing the build times and dividing by the number of builds conducted. (A minimal sketch appears after this list.)

  3.  What questions can I answer from this data?

    1. How does the average build time compare to the defined expectations?

    2. Are there significant differences in build times across different code branches?

    3. Are there specific time periods when build times tend to be slower vs faster?

    4. Are there any improvements or optimizations that can be made to reduce build times and improve efficiency?

  4. What should I take away from this data?

    1. Evaluating build optimizations: By comparing average build times before and after implementing optimizations, it becomes possible to assess the effectiveness of improvement efforts. This allows teams to make informed decisions on which optimizations to pursue further or explore alternative strategies for faster builds.

    2. Tracking performance trends: Monitoring changes in average build times over time can help identify performance trends. Sudden increases in build times may indicate underlying issues that need attention, while decreasing build times may reflect improvements in build infrastructure or codebase optimizations.

    3. Improving developer experience: Faster build times contribute to a smoother development experience, reducing waiting time and enabling developers to iterate more quickly. Analyzing average build times helps teams prioritize initiatives that enhance developer productivity and satisfaction.

    4. Optimizing resource allocation: Analyzing average build times can help identify resource-intensive builds or code segments. This information can guide resource allocation decisions, such as optimizing build servers, parallelizing build processes, or prioritizing performance optimizations for specific code modules.
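
A minimal sketch of the average build time calculation; the durations are illustrative and would come from your CI system's build start/finish timestamps.

```python
# Sketch: mean build duration over a set of builds, in seconds.
def average_build_time(durations_seconds: list[float]) -> float:
    return sum(durations_seconds) / len(durations_seconds) if durations_seconds else 0.0

durations = [412.0, 389.5, 530.25, 478.0]  # four example builds
print(f"average build time: {average_build_time(durations):.1f} s")
```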

Process Health

These metrics give you an understanding of how work is progressing and identify areas of risks and bottlenecks so you can more confidently predict when work will be completed.

Burndown: The burndown chart helps you visualize how much work and what kind of work is left within a particular project so you can determine when the work will be done or where your team may need additional support.

  1. Description: Burndown charts show the amount of work remaining or unresolved issues on the vertical axis and time on the horizontal axis. The chart starts with an initial baseline and tracks the decrease in work or resolved issues over time. By comparing the actual progress to the ideal trend line, the burndown chart helps teams and stakeholders monitor progress, identify deviations, and make informed decisions to stay on track and meet project goals.

  2. How is it calculated?: Burndown charts are calculated by defining the initial scope of work or unresolved issues and establishing time increments. As work progresses or issues are resolved, the remaining work or unresolved issues are tracked and plotted on the chart. An ideal trend line is also included to represent the expected progress. This provides a visual representation of progress over time and helps teams monitor their actual progress compared to the planned trajectory. (A minimal sketch appears after this list.)

  3. What questions can I answer from this data?

    1. How are specific projects progressing in terms of work completion or issue resolution? 

    2. Are there any deviations from the expected progress?

    3. Is the project likely to be completed within the planned time frame?

    4. How effective are the team’s efforts in completing work or resolving issues?

  4. What should I take away from this data?

    1. Risk Identification: Burndown charts can highlight risks and issues that may affect project progress. Sudden changes in the trend or unexpected spikes in the remaining work can indicate challenges that require immediate attention. It allows teams to proactively address risks and mitigate potential delays.

    2. Communication and Transparency: Burndown charts provide a clear visual representation of project progress, which can be shared with stakeholders and team members. It fosters transparency, improves communication, and ensures everyone involved has a shared understanding of the project's status.

    3. Performance Evaluation: By comparing the actual progress with the ideal trend line, teams can evaluate their performance. Deviations from the trend line can indicate issues or delays that need attention. It helps identify areas where improvements can be made to enhance productivity and efficiency.

    4. Forecasting and Planning: Burndown charts enable project managers to forecast project completion based on the current progress and trend. It helps in adjusting plans, resource allocation, and managing stakeholder expectations. It provides insights into whether the project is likely to be completed within the planned time frame.
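
A minimal sketch of the two series a burndown chart plots: actual remaining work per day and the ideal trend line from the initial scope down to zero. The numbers are illustrative.

```python
# Sketch: compute actual remaining work per day plus an ideal trend line.
def ideal_line(initial_scope: float, num_days: int) -> list[float]:
    """Evenly decreasing expected remaining work, from initial scope to 0."""
    return [initial_scope * (1 - day / num_days) for day in range(num_days + 1)]

initial_scope = 40                                   # e.g. story points at sprint start
completed_per_day = [0, 3, 5, 2, 8, 6, 4, 7, 3, 2]   # actual daily completions
remaining = [initial_scope]
for done in completed_per_day[1:]:
    remaining.append(remaining[-1] - done)

for day, (actual, ideal) in enumerate(
    zip(remaining, ideal_line(initial_scope, len(completed_per_day) - 1))
):
    print(f"day {day}: remaining={actual:>4}  ideal={ideal:5.1f}")
```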

 

Committed vs. Completed: This metric gives you an idea of how your sprint is progressing. It shows items committed at the start of the sprint, items added after the sprint started, and items incomplete and completed during the sprint. Understanding if your team is over- or under-committing helps you plan better.

  1. Description: "Committed" refers to the planned or estimated amount of work or tasks that a team or individual has committed to completing within a specific timeframe. It represents the initial plan or agreement for what needs to be accomplished. On the other hand, "completed" refers to the actual amount of work or tasks that have been finished or achieved within the given timeframe. It represents the tangible outcome or deliverables that have been accomplished based on the initial commitment. Comparing the committed and completed values helps assess the progress and performance of the team or individual in meeting their commitments and project goals.

  2. How is it calculated? 

    1. Committed: The committed value is determined by estimating or planning the amount of work or tasks that need to be completed within a specific timeframe. This estimation is typically based on the project requirements, scope, and available resources. The committed value can be expressed in various units, such as story points, hours, or tasks.

    2. Completed: The completed value is determined by tracking the actual progress and work that has been finished or achieved within the same timeframe. This is typically measured by recording the completed tasks or deliverables, marking them as "done," or updating the project management system with the progress made.

  3. What questions can I answer from this data?

    1. Are we on track to meet our commitments?

    2. Are there any discrepancies between committed and completed values?

    3. What is the team’s performance in meeting commitments?

    4. How can we improve our commitment accuracy and completion rate?

  4. What should I take away from this data?

    1. Communication and Transparency: Committed vs completed values provides a clear and visual representation of project progress, which can be shared with stakeholders. It fosters transparency, improves communication, and ensures a shared understanding of the project's status and achievements.

    2. Learning and Process Improvement: Committed vs completed values allow for learning and process improvement. Teams can analyze the factors contributing to successful completion or areas where improvements are needed. It facilitates a continuous improvement mindset and helps refine project management practices.

    3. Performance Evaluation: Viewing the committed vs completed values enables teams to evaluate their performance in meeting their commitments. It helps identify areas where the team may be exceeding or falling short of expectations, allowing for adjustments to be made in planning, resource allocation, or task management.

    4. Progress Assessment: Comparing the committed and completed values helps assess the overall progress of the project or specific tasks. It provides a clear indication of how much work has been completed relative to the initial commitments, allowing teams to gauge their progress accurately.

 

Cycle Time: Knowing how long issues have been in progress can help you resolve problems by identifying places where your team may need additional support.

  1. Description: Cycle time refers to the duration it takes to resolve or complete an issue from its creation to its closure. It measures the time from when an issue is reported or identified until it is considered resolved or fixed. The cycle time includes all the necessary steps involved in addressing the issue, such as analysis, development, testing, and deployment. Monitoring and analyzing issue cycle time helps identify bottlenecks, optimize workflows, and improve the efficiency of issue resolution processes.

  2. How is it calculated? Cycle time is calculated by measuring the duration between the creation or identification of an issue and its resolution or closure, i.e., by subtracting the start time from the end time.

  3. What questions can I answer from this data?

    1. How long does it take, on average, to resolve different types of issues?

    2. Are there any bottlenecks or delays in the issue resolution process?

    3. How has the issue resolution time changed over time?

    4. Are there any outside/additional factors that correlate with cycle time?

    5. How does cycle time impact customer satisfaction or project timelines?

  4. What should I take away from this data?

    1. Efficiency and Process Optimization: Analyzing cycle time helps identify bottlenecks, delays, or inefficiencies in the issue resolution process. It allows teams to streamline workflows, optimize resource allocation, and implement process improvements to reduce cycle time and enhance overall efficiency.

    2. Resource Planning and Workload Management: By understanding the average cycle time for different types of issues, teams can better plan and allocate resources. They can ensure that the workload is distributed effectively, deadlines are met, and teams are appropriately staffed to handle the volume and complexity of issues.

    3. Realistic Expectations and SLA Compliance: Analyzing cycle time provides insights into realistic expectations for issue resolution. It helps set SLA targets and monitor compliance. Teams can ensure they are meeting their commitments and delivering timely resolutions to customers or stakeholders.

    4. Continuous Improvement and Performance Evaluation: Analyzing cycle time over time allows teams to track performance, identify trends, and make data-driven decisions for continuous improvement. It helps evaluate the effectiveness of process changes or optimizations, allowing teams to refine their practices and enhance overall performance. 

Cycle Time Histogram: Identify how quickly issues are being completed from various states and easily identify which issues have longer cycle times. This analyzes how cycle time can differ for issues that encountered alerts versus items that didn’t. This metric is a key input for our forecasting algorithm.

  1. Description: A cycle time histogram is a graphical representation of the distribution of cycle times for a set of events or processes. It displays the frequency or count of events within specific time intervals. Typically, the horizontal axis represents the cycle time intervals, while the vertical axis represents the frequency or count of events falling within each interval. The histogram provides a visual summary of the distribution, allowing teams to identify the most common or frequent cycle times and understand the overall variability in the process.

  2. How is it calculated? A cycle time histogram is created by collecting cycle time data, determining time intervals or bins, counting the frequency of events within each bin, and plotting the histogram accordingly. It provides a visual representation of the distribution of cycle times, helping identify common or frequent cycle times and understand process variability. (A minimal sketch of the binning appears after this list.)

  3. What questions can I answer from this data?

    1. How much, if any, process variation is there?

    2. How does the cycle time distribution align with the SLAs or customer expectations?

    3. How does the cycle time compare across different categories or subgroups?

  4. What should I take away from this data?

    1. Central Tendency: The cycle time histogram allows for the identification of the central tendency, such as the mode, median, or mean cycle time. This provides a measure of the typical or average time it takes to complete the process. Understanding the central tendency helps in setting expectations and evaluating the overall performance of the process.

    2. Variation and Dispersion: The histogram also illustrates the variation and dispersion of cycle times. It shows the spread or range of cycle times observed, as well as any patterns or clusters within the distribution. Analyzing the variation helps identify areas of high variability and potential bottlenecks that may need attention for process improvement.

    3. Outliers and Anomalies: The histogram can reveal outliers or unusual data points that fall far outside the typical range of cycle times. These outliers may represent exceptional cases or issues that require investigation. Identifying and addressing outliers can help improve process efficiency and reduce the occurrence of unusually long or short cycle times.
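
A minimal sketch of binning cycle times (in days) into histogram intervals. The one-day bin width is an assumption; real dashboards typically make this configurable.

```python
# Sketch: bucket cycle times into fixed-width bins and count occurrences.
from collections import Counter

def cycle_time_histogram(cycle_times_days: list[float], bin_width: float = 1.0) -> dict[str, int]:
    bins = Counter(int(t // bin_width) for t in cycle_times_days)
    return {
        f"{b * bin_width:g}-{(b + 1) * bin_width:g} days": count
        for b, count in sorted(bins.items())
    }

times = [0.5, 1.2, 1.8, 2.1, 2.4, 2.9, 3.7, 6.0]
for interval, count in cycle_time_histogram(times).items():
    print(f"{interval}: {count}")
```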

 

DORA Lead Time: Enabled and configured by Allstacks (with CI/CD tools connected), DORA (DevOps Research and Assessment) lead time delivers unbiased data to assess teams’ efficiency and spur product development.

  1. Description: DORA lead time is a metric that tracks the time it takes for a code change to go from commit to production. It also gives additional insight into the efficiency of a team's development and release processes. 

  2. How is it calculated? DORA lead time is typically calculated by subtracting the start time (when the code change was committed) from the end time (when the change is deployed or released into production). A minimal sketch appears after this list.

  3. What questions can I answer from this data?

    1. How does lead time differ across different environments, such as development, staging, and production?

    2. How does lead time compare to industry standards or best practices?

    3. Are there any correlations between lead time and other performance metrics, such as deployment frequency or change failure rate?

    4. Are there any patterns in lead time based on size or complexity of code changes?

  4. What should I take away from this data?

    1. Efficiency of development process: DORA lead time can show how quickly code changes move from commit to production, highlighting the efficiency of the development to release process. Shorter lead times typically indicate faster delivery of new features, bug fixes, and enhancements. 

    2. Variations across teams and projects: Analyzing lead time can uncover variations in performance across different teams or projects. It can help identify high-performing teams and those that may require extra support or process improvement. 

    3. Relationships with other metrics: When analyzing DORA lead time in conjunction with other metrics, such as deployment frequency or change failure rate, it can reveal dependencies and correlations.

    4. ROI of process improvements: DORA lead time gives organizations the ability to evaluate the effectiveness of process improvements and investments made to optimize delivery pipelines. This ensures that any decisions made in regards to changing processes are data-driven, have the necessary resource allocation, and enhance future improvement efforts. 
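
A minimal sketch of DORA lead time per change: deployment time minus commit time, summarized with a median. The timestamp field names are assumptions about whatever CI/CD tool supplies the deployment events.

```python
# Sketch: lead time in hours for each change, then the median across changes.
from datetime import datetime
from statistics import median

changes = [
    {"committed_at": "2024-05-01T10:00:00+00:00", "deployed_at": "2024-05-01T16:00:00+00:00"},
    {"committed_at": "2024-05-02T09:30:00+00:00", "deployed_at": "2024-05-03T11:30:00+00:00"},
]

lead_times_hours = [
    (datetime.fromisoformat(c["deployed_at"]) - datetime.fromisoformat(c["committed_at"]))
    .total_seconds() / 3600
    for c in changes
]
print(f"median lead time: {median(lead_times_hours):.1f} hours")  # median of 6 h and 26 h
```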

Issues Age: How long issues have been in a particular state can help you resolve bottlenecks by identifying where the team may need additional support or an opportunity for process improvements.

  1. Description: Active issues age refers to the duration that unresolved issues have been present in the codebase. It helps assess the efficiency of issue resolution and prioritize efforts. Tracking active issues age highlights bottlenecks and areas needing attention. It measures the team's responsiveness and commitment to maintaining a healthy codebase.

  2. How is it calculated?: Active issues age is calculated by subtracting the creation time of an issue from the current time to determine its duration. This calculation helps track how long issues have been unresolved in the codebase.

  3. What questions can I answer from this data?

    1. What is the average age of active issues and how quickly are new issues addressed and resolved?

    2. Does the age of active issues correlate with issue severity or impact?

    3. Are there any specific areas in the codebase with a higher average age of active issues?

  4. What should I take away from this data?

    1. Issue Resolution Efficiency: Analyzing active issues age helps assess the efficiency of issue resolution efforts. It highlights how promptly issues are being addressed and resolved. Teams can identify any bottlenecks or areas where issues tend to remain unresolved for a significant period, enabling them to streamline their resolution processes and improve efficiency.

    2. Prioritization of Issue Resolution: Issues age helps in prioritizing issue resolution efforts. By focusing on long-standing or persistent issues with high ages, teams can ensure that critical issues are addressed promptly. It assists in allocating resources effectively and resolving issues that have the most significant impact on software quality or user experience.

    3. Continuous Improvement: Analyzing active issues age over time provides insights into the progress of issue resolution efforts. Teams can observe trends and patterns in the age of issues, enabling them to monitor the effectiveness of their mitigation strategies and make data-driven adjustments. It promotes a culture of continuous improvement by identifying areas for refinement and optimization in the issue management process.

Issue Label: Understanding what is actively being worked on is difficult. Looking at your issue labels by day can provide a good indicator of what your team is working on and what projects are progressing.

  1. Description: An issue label is a categorization or tag assigned to a specific problem or bug identified within a software project's codebase. It helps in organizing and prioritizing the issues, allowing developers to easily track and address them. Issue labels provide a standardized way to communicate and collaborate among team members, facilitating efficient debugging and software maintenance processes.

  2. How is it calculated? An issue label is typically based on the severity and impact of the problem, as well as its priority for resolution. Factors such as the level of functionality affected, potential risks or consequences, and the urgency of the issue are taken into account to determine an appropriate label. This ensures that issues are categorized accurately, allowing developers to focus their attention on critical problems first.

  3. What questions can I answer from this data?

    1. What are the most common types of issues or bugs currently being encountered?

    2. Are there specific labels that are frequently assigned to issues reported by end-users, indicating potential usability or functionality concerns?

    3. Is there a correlation between the complexity of the code and the frequency of issues reported?

    4. How effectively are issues being categorized and labeled based on their severity and impact?

  4. What should I take away from this data?

    1. Prioritization of tasks: By understanding the distribution of issue labels, teams can prioritize tasks based on their severity, impact, and urgency. This allows them to focus on critical issues first and allocate resources effectively.

    2. Resource allocation: By analyzing issue labels, teams can gain insights into the areas of the codebase that require more attention or investment in terms of development resources, testing efforts, or documentation.

    3. Performance monitoring: Monitoring the resolution time and reopening rate of different issue labels provides visibility into the team's performance in addressing and resolving issues. It can help identify bottlenecks, inefficiencies, or areas for process optimization.

    4. Continuous improvement: By regularly analyzing code issue labels, teams can track their progress over time, measure the effectiveness of their efforts in issue resolution, and identify areas for continuous improvement in software development practices.

Issue Lead Time: Understand how long an issue will take based on recent history to complete the current phase. This is helpful for identifying critical issues that need to make it through ASAP.

  1. Description: Issue lead time refers to the duration it takes for an issue, such as a bug or a feature request, to be resolved from the moment it is reported or identified. It measures the time elapsed between issue creation and its closure. Issue lead time includes the time spent on issue triaging, analysis, development, testing, and any necessary iterations. Monitoring and reducing issue lead time is crucial for improving the efficiency of development processes, enhancing customer satisfaction, and maintaining a responsive software development lifecycle.

  2. How is it calculated? Issue lead time is the duration between the creation and closure of an issue. It is calculated by subtracting the timestamp of issue creation from the timestamp of issue closure. 

  3. What questions can I answer from this data? 

    1. What strategies or process improvements can be implemented to reduce lead time and improve issue resolution efficiency?

    2. Are there any correlations between lead time and factors such as issue severity, complexity, or priority?

    3. Which stages in the development process contribute the most to the overall lead time?

    4. Are there specific types of issues or categories that consistently have longer lead times?

  4. What should I take away from this data?

    1. Process Efficiency: Analyzing issue lead time helps identify areas of inefficiency within the development process. It highlights stages or steps that contribute significantly to lead time, allowing teams to focus on process optimization and reducing unnecessary delays.

    2. Resource Allocation: Understanding the distribution of issue lead time provides insights into resource allocation. It helps identify where development resources are being spent the most and allows teams to allocate resources more effectively to improve overall productivity and issue resolution speed.

    3. Predictability: Analyzing issue lead time can help teams establish benchmarks and predict future timelines for issue resolution. By understanding the historical lead time data, teams can set more accurate expectations, manage stakeholder communication, and make realistic commitments.

    4. Continuous Improvement: By consistently analyzing issue lead time, teams can monitor their progress over time and identify areas for continuous improvement. It enables teams to track the impact of process changes, evaluate the effectiveness of their efforts, and implement strategies to reduce lead time and enhance overall development efficiency.

Issue Type: Looking at issues by type will help you understand what your teams are working on.

  1. Description: Issue type refers to the categorization or classification of a reported problem or request within an issue tracking system. It provides a way to differentiate between different types of issues, such as bugs, feature requests, improvements, or documentation needs. Issue types help in organizing and prioritizing tasks, allowing development teams to effectively address and resolve specific types of issues based on their nature and impact. They provide a standardized framework for classifying and managing issues throughout the software development lifecycle.

  2. How is it calculated? Issue type is typically determined based on the nature or category of the reported problem or request. The calculation involves analyzing the information provided by the reporter and making an informed decision on the appropriate issue type. This decision is usually made by the person responsible for triaging or managing the issues in the tracking system, taking into consideration factors such as the description, symptoms, and context of the reported issue. It is a subjective determination based on the available information and the team's understanding of the issue.

  3. What questions can I answer from this data?

    1. How effectively are the issue types being used and categorized by the reporting stakeholders?

    2. What impact do different issue types have on the overall development timeline and resource allocation?

    3. Are there specific issue types that occur more frequently in certain areas or modules of the codebase?

    4. How does the distribution of issue types align with the project's development priorities and goals?

  4. What should I take away from this data?

    1. Issue Prioritization: Analyzing issue types helps in prioritizing tasks based on their nature and impact. It allows development teams to focus on critical bugs, address feature requests, and plan improvements accordingly. By understanding the distribution of issue types, teams can allocate resources effectively and address the most impactful issues first.

    2. Development Insights: Analyzing issue types provides insights into the nature of reported problems and requests. It helps identify common patterns or trends, enabling teams to gain a deeper understanding of the challenges faced by users or stakeholders. This information can guide development decisions, such as architectural changes, quality improvements, or usability enhancements.

    3. Resource Allocation: Understanding the distribution of issue types helps in allocating development resources effectively. Different issue types may require varying skill sets, expertise, or time commitments. By analyzing issue types, teams can ensure that the right resources are assigned to address specific types of issues, resulting in more efficient development processes.

    4. Process Improvement: Analyzing issue types provides opportunities for process improvement. It helps identify areas where specific types of issues are recurring, indicating potential areas for code refactoring, automation, or documentation improvement. This analysis can drive continuous process improvement efforts and enable teams to enhance their overall development practices. 

Issues: Visualizing the number of issues in progress helps see the distribution of work and determine when a project will be completed.

  1. Description: Active issues refer to ongoing challenges that affect the efficiency and reliability of development. These issues can include build failures, long build times, inadequate testing, complexity, inconsistent build environments, and poor monitoring. This is shown in our metric by a burndown chart that includes resolved states.

  2. How is it calculated? To calculate issues in progress you first need to establish a project management or issue tracking system such as Jira. This system allows you to create and manage tasks efficiently. Once you break down your project into smaller issues, you can categorize them into statuses like "To Do," "In Progress," or "Done," and assign them to team members. As team members work on their assigned tasks, they should update the task status to "In Progress" to indicate active work. By monitoring the "In Progress" category, you can easily track which issues are currently being worked on. Analyzing the number of issues in progress provides valuable insights into team productivity and efficiency. Regularly tracking progress over time using this approach allows for better project management. It's important to note that while specific tools and terminologies may vary, these fundamental principles remain applicable. 

  3. What questions can I answer from this data?

    1. What seem to be the most common types of issues found?

    2. What is the frequency of issues? Do there seem to be patterns or correlations between them?

    3. What is the average time it takes to resolve active issues?

    4. Are there specific components that have a higher occurrence of issues?

  4. What should I take away from this data? 

    1. Monitoring the Impact of Mitigation Efforts: Analyzing active issues allows teams to track the effectiveness of their mitigation efforts. By monitoring the resolution rate and recurrence of specific issues, teams can gauge the impact of their actions and make data-driven adjustments to their strategies.

    2. Opportunities for Code Refactoring and Improvement: Active issues highlight areas of the codebase that require refactoring or improvement. By addressing these issues, teams can enhance code maintainability, readability, and performance.

    3. Insight into Root Causes: Identifying patterns or correlations among active issues can help uncover underlying root causes, such as problematic coding practices, architectural deficiencies, or lack of proper testing. Addressing these root causes can lead to more effective and lasting solutions.

    4. Identification of Common Pain Points: Analyzing active issues helps identify the most common and recurring challenges faced in the codebase. This allows teams to focus their efforts on addressing these pain points systematically.
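
To make the counting above concrete, here is a minimal Python sketch. It assumes issues have already been exported from a tracker as simple records with a status field; the field names are illustrative and not tied to any specific tool's API or to how this metric is computed in the product.

```python
from collections import Counter
from datetime import date

# Hypothetical export from an issue tracker: each issue has a status and,
# if finished, a resolution date. Field names are illustrative only.
issues = [
    {"key": "APP-101", "status": "To Do",       "resolved": None},
    {"key": "APP-102", "status": "In Progress", "resolved": None},
    {"key": "APP-103", "status": "In Progress", "resolved": None},
    {"key": "APP-104", "status": "Done",        "resolved": date(2024, 5, 2)},
]

# Tally issues by status; the "In Progress" bucket is the active work,
# and the "Done" bucket feeds the resolved series of a burndown chart.
by_status = Counter(issue["status"] for issue in issues)

print("Issues by status:", dict(by_status))
print("Issues in progress:", by_status["In Progress"])
```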

 

Team Productivity

These metrics help you understand team performance, trends, and visibility into what is being worked on.

Issues Activity: Looking at Issue Activity can show you how your team is managing each aspect of the development process. You can see where your team is spending the most time, and where they might need help, training, or additional resources.

  1. Description: Issues activity refers to problems or challenges that arise during different stages of a project. Cards stuck in the "To Do" state indicate a backlog of tasks that haven't been started, which may point to resource constraints or unclear priorities. Cards lingering in the "In Progress" state for an extended period may indicate bottlenecks, lack of collaboration, or underestimated task complexity. Cards frequently returning to the "In Review" state may suggest issues with quality control or insufficient feedback loops. Lastly, cards remaining in the "Done" state may raise concerns about project scope creep or difficulty in achieving project closure and delivering final outcomes.

  2. How is it calculated? Calculating issues by activity state involves defining the activity states and categorizing card types, then tracking the issues within each state for each card type. From that data you can calculate metrics such as the number or percentage of cards in each state by card type, which helps identify patterns, areas for improvement, and potential bottlenecks in the project workflow. The exact calculation method may vary depending on the project management tools and processes employed (a short grouping sketch follows this entry).

  3. What questions can I answer from this data?

    1. Are certain card types more likely to have issues in specific activity states?

    2. How do issues impact overall project progress and timelines?

    3. Are there any correlations between issues and specific project team members or roles?

    4. What are the most common types of issues encountered in each activity state?

  4. What should I take away from this data?

    1. Workflow Bottlenecks: Issues by activity state can help identify bottlenecks or stages in the project workflow where issues frequently occur. This insight allows project managers to focus their efforts on improving those specific areas to streamline the overall process.

    2. Resource Allocation: By analyzing the data, project managers can identify if certain activity states consistently require more resources or encounter more issues. This information helps in allocating resources effectively and ensuring sufficient support during critical stages of the project.

    3. Improvement Opportunities: Data on issues by activity state highlights areas that require improvement. By understanding the common types of issues and their occurrence at specific activity states, project teams can implement targeted process improvements, training programs, or corrective actions to prevent or mitigate similar issues in the future.

    4. Project Performance and Timelines: Analyzing the data can provide insights into the impact of issues on project performance and timelines. It helps project managers assess the overall effect of issues on project progress, identify potential delays, and make informed decisions to mitigate risks and ensure timely delivery.
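
As an illustration of grouping issues by card type and activity state, the following is a minimal Python sketch over a hypothetical board export; the exact calculation in your tooling may differ.

```python
from collections import defaultdict

# Hypothetical board export: each card has a type and a current activity state.
cards = [
    {"id": 1, "type": "Bug",   "state": "In Progress"},
    {"id": 2, "type": "Bug",   "state": "In Review"},
    {"id": 3, "type": "Story", "state": "In Progress"},
    {"id": 4, "type": "Story", "state": "To Do"},
    {"id": 5, "type": "Task",  "state": "In Review"},
]

# Count cards per (card type, activity state) pair and totals per type.
counts = defaultdict(int)
totals = defaultdict(int)
for card in cards:
    counts[(card["type"], card["state"])] += 1
    totals[card["type"]] += 1

# Report the share of each card type sitting in each state.
for (card_type, state), n in sorted(counts.items()):
    share = 100 * n / totals[card_type]
    print(f"{card_type:<6} {state:<12} {n} card(s) ({share:.0f}% of {card_type})")
```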

Issue Bouncebacks: See when an issue is returned to a previous state, which usually indicates a quality issue such as a failure or a bug that needs rework.

  1. Description: Issue bouncebacks typically refer to situations where a task or project issue that was thought to be resolved or completed reappears or resurfaces later on. It represents a setback or regression in the project's progress and requires additional effort to address. Issue bouncebacks can occur due to various reasons, such as incomplete resolution, unforeseen dependencies, inadequate testing, or changing project requirements, and they often require rework or adjustments to ensure proper resolution.

  2. How is it calculated? Issue bouncebacks are typically calculated by tracking the number of issues that reoccur or resurface after they were believed to be resolved or completed. This involves recording and monitoring instances where an issue previously marked as resolved or closed requires further attention or rework. By analyzing the frequency and impact of issue bouncebacks, project managers can assess the effectiveness of their issue resolution processes and identify areas for improvement (a minimal sketch follows this entry).

  3. What questions can I answer from this data?

    1. What is the frequency and pattern of issue bouncebacks?

    2. What are the root causes of issue bouncebacks?

    3. How do issue bouncebacks impact project timelines and resources?

    4. Are there any recurring patterns or lessons learned from issue bouncebacks?

    5. How effective are the issue resolution processes and actions taken to prevent bouncebacks?

  4. What should I take away from this data?

    1. Identification of Root Causes: Project managers can identify the common root causes of issue bouncebacks. This understanding helps in addressing underlying issues, improving processes, and implementing preventive measures to minimize bouncebacks in the future.

    2. Process Improvement Opportunities: Data analysis highlights areas where the issue resolution processes can be enhanced to reduce the occurrence of bouncebacks. It provides valuable insights into the effectiveness of current practices and identifies opportunities for streamlining workflows, enhancing testing procedures, or improving communication and collaboration.

    3. Resource Allocation Optimization: Project managers can identify tasks or areas prone to bouncebacks and allocate additional resources or adjust schedules accordingly to minimize the impact on overall project timelines.

    4. Lessons Learned and Continuous Improvement: By capturing data on issue bouncebacks, project teams can extract valuable lessons learned. These insights can be used to drive continuous improvement, refine project management practices, and implement proactive measures to prevent similar issues in future projects.
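
One minimal way to approximate bounceback counting is to walk each issue's status history and count transitions that move backwards in the workflow. The sketch below assumes hypothetical status histories and an illustrative workflow ordering; real trackers expose this information through changelogs or audit logs, and the product's own calculation may differ.

```python
# Hypothetical status histories per issue, ordered oldest to newest.
histories = {
    "APP-7": ["To Do", "In Progress", "Done"],
    "APP-8": ["To Do", "In Progress", "Done", "In Progress", "Done"],
    "APP-9": ["To Do", "In Progress", "In Review", "In Progress", "Done"],
}

# Illustrative workflow ordering: moving to a lower rank is a bounceback.
ORDER = {"To Do": 0, "In Progress": 1, "In Review": 2, "Done": 3}

def bouncebacks(history):
    """Count transitions that move an issue backwards in the workflow."""
    return sum(
        1 for prev, curr in zip(history, history[1:]) if ORDER[curr] < ORDER[prev]
    )

for key, history in histories.items():
    print(key, "bouncebacks:", bouncebacks(history))

# Share of issues that bounced at least once.
bounced = sum(1 for h in histories.values() if bouncebacks(h) > 0)
print(f"Bounceback rate: {100 * bounced / len(histories):.0f}%")
```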

Issue Creation: The rate of new issues created can show you trends in the amount of work coming in and indicate if you are planning too much work for your dev team.

  1. Description: Issue creations over time refer to the pattern and frequency of new issues being identified and logged throughout the course of a project. It involves tracking and analyzing the number of issues created during specific time periods, such as days, weeks, or months. This data provides insights into the project's issue discovery rate, potential trends in issue identification, and can help in identifying periods of increased or decreased issue activity. Monitoring issue creations over time helps project teams understand the evolving nature of project challenges and aids in resource allocation and proactive issue management.

  2. How is it calculated? Issue creations over time are usually calculated by recording the number of newly identified issues within specific time intervals, such as daily, weekly, or monthly. This involves tracking each issue's creation date and aggregating the data to determine how many issues were created in each time period. Project management tools or issue tracking software often provide features and reports that facilitate this calculation and its visualization (a small bucketing sketch follows this entry).

  3. What questions can I answer from this data?

    1. Are there any patterns or spikes in issue creation?

    2. How does issue creation correlate with project activities or milestones?

    3. Which types of issues are frequently created over time?

    4. What is the trend in issue creation over time?

  4. What should I take away from this data?

    1. Trend Analysis: By analyzing issue creations over time, project teams can identify overall trends in issue discovery. This insight helps in understanding whether the number of issues is increasing, decreasing, or remaining relatively stable. It allows project managers to anticipate potential challenges and allocate resources accordingly.

    2. Seasonality or Periodic Patterns:  Reviewing issue creations over time can reveal any recurring patterns or seasonality in issue creations. Identifying periods of increased or decreased issue activity can help project teams understand the factors contributing to these patterns, such as project milestones, resource availability, or external influences. This understanding aids in planning and managing resources effectively during peak periods.

    3. Issue Hotspots: Analyzing issue creations over time can pinpoint specific phases or areas of the project that consistently experience a high number of issues. This insight allows project managers to focus on these "issue hotspots" and implement targeted strategies to address the underlying causes, such as improving requirements gathering, enhancing testing procedures, or providing additional training or support.

    4. Impact of Mitigation Efforts: By comparing issue creations before and after the implementation of mitigation efforts, project teams can evaluate the effectiveness of these actions in reducing issue occurrences over time. This analysis helps in assessing the impact of process improvements, corrective actions, or lessons learned and allows for continuous refinement of project management practices. 
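
As a small illustration of bucketing issue creations by time interval, the sketch below groups hypothetical creation dates by ISO week; the same idea applies to daily or monthly buckets.

```python
from collections import Counter
from datetime import date

# Hypothetical creation dates pulled from an issue tracker export.
created = [
    date(2024, 4, 1), date(2024, 4, 3), date(2024, 4, 4),
    date(2024, 4, 9), date(2024, 4, 16), date(2024, 4, 18),
]

# Bucket creations by ISO (year, week) to see the discovery rate over time.
per_week = Counter(d.isocalendar()[:2] for d in created)

for (year, week), n in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {n} issue(s) created")
```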

Issues Created vs Completed: This will show you the rate of completing issues compared to how many issues are being created. This can be an indicator of potential quality or capacity challenges due to a growing backlog.

  1. Description: Created vs Completed measures the rate at which issues are being created compared to the rate at which they’re being resolved. It provides insight into the efficiency and progress of a project or team. Typically, issues being created faster than they are completed indicates a growing backlog of work, while the opposite suggests efficient problem resolution. Monitoring these metrics helps managers or project leaders identify bottlenecks and allocate resources effectively in order to keep projects on track.

  2. How is it calculated? Created vs completed is calculated by comparing the number of issues created to the number of issues resolved within a given time frame. The ratio between these two quantities provides a measure of the rate at which new issues are being generated versus the rate at which they are being addressed, helping gauge the efficiency and progress of a project or team (a minimal comparison sketch follows this entry).

  3. What questions can I answer from this data?

    1. Are there any patterns or trends in issue creation and resolution?

    2. How productive is the team in resolving issues? 

    3. How effective are process improvements or changes?

    4. Is the team falling behind or staying on track?

  4. What should I take away from this data?

    1. Workload management: The analysis can reveal if there is a balance between issue creation and resolution. If the number of issues created consistently exceeds the number of issues completed, it suggests a potential backlog and the need to allocate more resources or optimize workflows to manage the workload effectively.

    2. Impact of changes or improvements: If changes have been implemented in workflows, resources, or team structures, analyzing the data before and after the changes can help evaluate their impact on issue resolution. This information can guide future decision-making and adjustments.

    3. Efficiency and productivity: By examining the rate of issue completion compared to issue creation, you can assess the efficiency and productivity of the team or individuals involved. Consistently high completion rates indicate effective problem-solving and a productive workflow.

    4. Individual or team performance: Analyzing data can help identify individuals or teams that excel in issue resolution. Their strategies and approaches can be shared with others to improve overall performance and provide guidance on best practices.
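
A minimal created-vs-completed comparison can be sketched as follows, assuming issue records with creation and optional completion dates; the data and field names are hypothetical, not any particular tracker's schema.

```python
from datetime import date

# Hypothetical issue records with creation and (optional) completion dates.
issues = [
    {"created": date(2024, 5, 1), "completed": date(2024, 5, 3)},
    {"created": date(2024, 5, 2), "completed": None},
    {"created": date(2024, 5, 6), "completed": date(2024, 5, 9)},
    {"created": date(2024, 5, 8), "completed": None},
]

window_start, window_end = date(2024, 5, 1), date(2024, 5, 31)

created = sum(window_start <= i["created"] <= window_end for i in issues)
completed = sum(
    i["completed"] is not None and window_start <= i["completed"] <= window_end
    for i in issues
)

print(f"Created: {created}, Completed: {completed}")
if created:
    # A ratio persistently below 1.0 means the backlog is growing.
    print(f"Completed / Created: {completed / created:.2f}")
```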

Issues Worked On: Understand team bandwidth by seeing the number of issues they are working on.

  1. Description: Issues worked on typically refers to problems, tasks, or projects that are currently being addressed by an individual or team. These issues can arise from various sources such as customer requests or feedback, bugs, feature requests, or internal requirements. 

  2. How is it calculated?: Calculating the number of issues being worked on typically involves tracking the total count of active or ongoing issues at a given point in time. This can be done by counting the number of open or in-progress issues in an issue tracking system or project management tool. The calculation is a simple count or tally of the issues that are currently being addressed or actively worked on by the team or individuals involved.

  3. What questions can I answer from this data?

    1. Are there recurring issues or specific types of issues that frequently experience delays or roadblocks? 

    2. What are the priority levels of the issues being worked on?

    3. How long does it take, on average, to resolve or complete an issue?

    4. How many issues can the team or individual handle simultaneously?

  4. What should I take away from this data?

    1. Workload assessment: You can gain insights into the distribution and volume of issues being worked on by team members or individuals. This allows for workload assessment, identification of potential bottlenecks, and balancing workloads to ensure efficient utilization of resources.

    2. Progress tracking: Examining the status of issues being worked on provides visibility into the progress of individual issues and the overall project. It helps identify completed, in-progress, and unresolved issues, allowing for better tracking of project milestones and timely resolution.

    3. Performance evaluation: This metric enables the evaluation of performance metrics, such as average resolution time or adherence to deadlines. It helps identify areas of improvement, assess individual or team productivity, and establish benchmarks for future projects.

    4. Resource optimization: Analyzing this data reveals patterns and trends in issue types, priorities, or dependencies, enabling effective resource allocation. It assists in identifying areas where additional resources or expertise may be required, optimizing workflows, and making informed decisions for better resource management.

 

Items Worked On: Understand productivity trends and capacity by viewing the number of items an individual has worked on.

  1. Description: Items worked on generally refer to the tasks, assignments, or deliverables that individuals or teams are actively working on within a project or workflow. These items can include specific work tasks, projects, objectives, or milestones that contribute to the overall progress and completion of a project. They are typically tracked and managed to ensure efficient allocation of resources, timely completion, and successful project execution.

  2. How is it calculated? The calculation of items being worked on typically involves tracking and counting the number of active or ongoing items within a given timeframe. The specific method of calculation can vary depending on the project management system or tracking tools being used. It often involves reviewing the status or progress of each item and identifying those that are in progress, being actively worked on, or have not yet been completed. The count of these items provides the calculation of items being worked on at a particular point in time.

  3. What questions can I answer from this data?

    1. How many items are being worked on by each team member?

    2. What is the status of each item being worked on?

    3.  How efficiently are items being worked on?

    4. Are items being worked on in alignment with the project timeline and milestones?

    5. How effectively are resources being allocated to items being worked on?

  4. What should I take away from this data?

    1. Progress tracking: Examining this metric provides insights into the status and progress of individual items and the overall project. It helps identify completed, in-progress, or pending items, enabling effective tracking of project milestones and timely completion.

    2. Productivity assessment: The data allows for the evaluation of productivity metrics such as the number of items completed per team member or the time taken to complete specific types of items. It helps assess the efficiency of individuals, teams, or the organization as a whole and identify areas for improvement.

    3. Process improvement: By examining this metric, you can identify bottlenecks, delays, or areas where items frequently get stuck. This information highlights opportunities for process improvement, such as streamlining workflows, addressing recurring issues, or implementing automation to enhance productivity and reduce turnaround times.

R&D Velocity: Understand how long work takes to get completed to improve planning and understand trends in performance.

  1. Description: Velocity is expressed as the number of user stories, features, or other units of work completed by the team during a given iteration. It helps the team understand its productivity and allows for better planning and estimation of future work. Velocity should be used as an internal team metric for planning and forecasting, rather than as a tool for comparison between teams. It provides a helpful guideline for the team's capacity and helps manage expectations for stakeholders.

  2. How is it calculated? Velocity is calculated by tracking and summing the number of fully completed user stories or features during a specific iteration or sprint. To calculate velocity, teams need to pick a consistent unit of measurement (user stories, story points, features, or a similar unit), track completed work, count only fully completed units, exclude incomplete work, and sum the completed units (a short summation sketch follows this entry).

  3. What questions can I answer from this data?

    1. How much work can the team consistently deliver within each iteration or sprint?

    2. How accurate are the team's initial estimations for user stories or features? 

    3.  How much work can be committed to in future iterations or sprints?

    4. How consistent is the team's delivery over time? 

  4. What should I take away from this data?

    1. Predictability and Planning: Velocity data allows teams to assess their predictability in delivering work over time. Analyzing trends and patterns in velocity helps in more accurate planning, forecasting, and setting realistic expectations for stakeholders.

    2. Process Improvement Opportunities: By analyzing velocity data, teams can identify areas for process improvement. If there are consistent fluctuations or a declining velocity trend, it may indicate inefficiencies, bottlenecks, or issues that need to be addressed within the development process. This insight can drive continuous improvement initiatives to enhance team productivity.

    3. Stakeholder Communication and Expectation Management: Velocity analysis provides a metric to communicate the team's capabilities and progress to stakeholders. It helps manage expectations by providing a data-driven basis for discussing scope, timelines, and delivery. Understanding the team's velocity enables better alignment between the team and stakeholders, fostering effective collaboration and transparency.

    4. Capacity and Workload Management: Velocity analysis helps determine the team's capacity and provides a baseline for workload management. By examining historical velocity data, teams can identify their average output and plan future iterations or sprints accordingly, ensuring a realistic and achievable workload.
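
The summation described above can be sketched in a few lines of Python, assuming completed stories are tagged with a sprint and a point value; the data is illustrative only, and your tool of record normally performs this aggregation for you.

```python
from collections import defaultdict

# Hypothetical sprint log: only fully completed stories appear here, with
# their story points; incomplete work has already been excluded.
completed_stories = [
    {"sprint": "Sprint 14", "points": 3},
    {"sprint": "Sprint 14", "points": 5},
    {"sprint": "Sprint 15", "points": 8},
    {"sprint": "Sprint 15", "points": 2},
    {"sprint": "Sprint 15", "points": 3},
]

# Velocity per sprint is simply the sum of completed points in that sprint.
velocity = defaultdict(int)
for story in completed_stories:
    velocity[story["sprint"]] += story["points"]

for sprint, points in sorted(velocity.items()):
    print(sprint, points)

# A rolling average over recent sprints is often used for forecasting.
recent = [points for _, points in sorted(velocity.items())[-3:]]
print("Average velocity:", sum(recent) / len(recent))
```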

Velocity: The amount of work the team completes, which measures delivery throughput.

  1. Description: Velocity refers to the speed and efficiency at which a team is delivering software. It encompasses the rate at which features and updates are implemented, tested, and deployed. A higher velocity should indicate that the team is capable of delivering high-quality software in a timely manner, while a low velocity may suggest that there are bottlenecks or inefficiencies within the development process.

  2. How is it calculated?: Velocity is typically measured by the amount of work completed within a specified timeframe (per sprint, per month, etc.). It is calculated by summing the story points or the number of completed user stories during that period.

  3. What questions can I answer from this data?:

    1. Are there any fluctuations or trends in velocity over time?

    2. What is the impact of changes in team composition, project scope, or external factors on velocity?

    3. How accurate are the team's estimations and projections based on past velocity?

    4. Are there any correlations between velocity and other metrics, such as code quality, customer satisfaction, or defect rates?

  4. What should I take away from this data?:

    1. Team Capacity: Velocity provides insights into the team's capacity and their ability to deliver work within a specific timeframe. It helps set realistic expectations for project planning and resource allocation.

    2. Estimation Accuracy: By comparing estimated effort with actual velocity, teams can assess the accuracy of their estimation practices. This feedback loop helps refine estimation techniques and improves future planning.

    3. Project Predictability: Velocity data allows for predicting project completion dates or delivering accurate progress reports to stakeholders. It enhances communication and ensures transparency regarding project timelines and milestones.

    4. Process Improvement: Analyzing velocity trends can highlight areas for process improvement. For example, if velocity is consistently low, it may indicate the need for identifying and resolving bottlenecks, improving team collaboration, or addressing skill gaps.

 

 

Delivery 

Tracking how consistently and frequently your team is delivering items 

 

Commitment Accuracy: How well does your team forecast and deliver committed work?

  1. Description: Commitment accuracy reflects how well a team forecasts and delivers the committed amount of work within an iteration or sprint. It relies on effective estimation, planning, and consistently delivering on commitments. Factors like estimation skills, capacity, resources, and external influences affect commitment accuracy. Continuous improvement is pursued by learning from past iterations and adjusting practices. Accurate commitments foster trust and alignment with stakeholders, enabling better communication and expectation management. Improving commitment accuracy enhances project management, resource allocation, and overall project success.

  2. How is it calculated? Commitment accuracy is assessed qualitatively based on the team's ability to consistently meet their commitments and deliver on the agreed-upon scope and goals within an iteration or sprint. It involves comparing the actual delivered work against the committed work, taking into account any unforeseen issues or deviations that may have impacted the team's ability to meet their commitments. The focus is on evaluating the team's reliability in fulfilling their commitments rather than calculating a specific numerical value.

  3. What questions can I answer from this data?

    1. How often does the team meet their committed goals in each iteration or sprint?

    2. Are there patterns or trends in commitment accuracy over time?

    3. Are there specific types of tasks or user stories that the team consistently struggles to deliver on time?

    4. How does commitment accuracy vary across different project phases or types of work? Within different projects or teams?

    5. Are there any improvements or adjustments in the estimation or planning process that can enhance commitment accuracy?

  4. What should I take away from this data?

    1. Assessment of Reliability: Analyzing commitment accuracy data allows for an assessment of the team's reliability in meeting their commitments. It provides a measure of how often the team delivers on their promised work and helps identify any recurring patterns or issues that may be impacting commitment accuracy.

    2. Predictability and Planning: Analyzing commitment accuracy data over time helps teams assess their predictability in meeting commitments. It provides a basis for more accurate planning and forecasting, enabling teams to set realistic expectations and allocate resources effectively.

    3. Continuous Improvement: Commitment accuracy data serves as a feedback mechanism for teams to continuously improve their performance. It encourages a culture of learning, reflection, and adaptation by identifying areas for refinement, adjusting estimation techniques, and refining processes to enhance commitment accuracy.

    4. Improvement Opportunities: By analyzing commitment accuracy data, teams can identify areas for improvement in their estimation, planning, and execution processes. It highlights potential sources of inaccuracies and provides opportunities to refine practices and enhance the team's ability to accurately commit to and deliver work.

Delivery Velocity: How quickly is your team delivering completed work or features?

  1. Description: Delivery velocity refers to the rate at which a development team delivers completed work or features over a given period. It measures the efficiency and productivity of the team in terms of how quickly they can turn ideas into working software and bring value to the end-users or stakeholders.

  2. How is it calculated? Delivery velocity in software development is typically calculated by dividing the number of completed and deployed user stories, features, or units of work by the time it took to deliver them. The time taken can be measured in various units, such as weeks, sprints, or iterations. It's important to note that delivery velocity is a historical metric that is calculated retrospectively based on completed work, providing a measure of the team's past performance.

  3. What questions can I answer from this data?

    1. How consistent is the team's delivery velocity over time?

    2. What is the average number of user stories or features delivered per iteration or sprint?

    3. How does delivery velocity vary across different teams or projects within the organization?

    4. Are there specific factors or dependencies that consistently impact the team's delivery velocity?

    5. How does delivery velocity compare to initial estimations or expectations for the project?

  4. What should I take away from this data?

    1. Planning and Forecasting: Analyzing delivery velocity data helps in better project planning and forecasting. It provides insights into the team's capacity and helps estimate how much work can be completed in future iterations or sprints.

    2. Project Health and Predictability: Delivery velocity data provides insights into the health and predictability of software development projects. Consistent and stable delivery velocity indicates a more predictable and manageable project, while significant fluctuations may signal potential issues or areas for further investigation.

    3. Continuous Learning and Adaptation: Delivery velocity data promotes a culture of continuous learning and adaptation. It encourages teams to reflect on their performance, identify areas for improvement, and experiment with process changes to enhance delivery efficiency.

    4. Iterative Improvement: Delivery velocity data serves as a feedback mechanism for continuous improvement. By analyzing trends and patterns, teams can identify areas for optimization, address bottlenecks, and make adjustments to their processes, leading to increased efficiency and productivity.

 

Work Patterns

Understand how and when your teams are working to balance workload and reduce burnout.

Active Contributors: The number of team members contributing to the codebase over time.

  1. Description: Active contributors are individuals who actively participate in the project across areas such as coding and development, testing and quality assurance, documentation, collaboration and communication, bug reporting and issue resolution, community engagement, and continuous learning. Their expertise and dedication are vital in delivering high-quality software solutions.

  2. How is it calculated? Calculating active contributors varies depending on the tracking tools used. Approaches include analyzing commit history in version control systems, examining activity in issue tracking systems, assessing contributions on code review and collaboration platforms, and considering community engagement metrics (a commit-history sketch follows this entry). However, it's essential to recognize that quantitative metrics may not capture all types of contributions; qualitative aspects like quality, leadership, mentorship, and teamwork are also important considerations.

  3. What questions can I answer from this data?

    1. Which contributors actively engage in community discussions, forums, or contribute to open-source projects?

    2. How balanced is the distribution of contributions among the contributors?

    3. Which contributors have been involved in resolving the highest number of reported issues or bug fixes?

    4. Who are the most active contributors in terms of code commits, bug fixes, or feature implementations?

    5. How has the contribution level of specific individuals or the overall contributor base changed over time?

  4. What should I take away from this data?

    1. Identifying top contributors: By analyzing this metric, you can identify the most active contributors who have made significant contributions to the project, either through code commits, bug fixes, or resolving issues. Recognizing these key contributors can help appreciate their efforts and provide them with appropriate recognition or rewards.

    2. Balancing workload: Analyzing the distribution of contributions among contributors can help identify any imbalances or disparities in workload. This insight can guide project managers or team leads in ensuring equitable distribution and avoiding burnout or over-reliance on a few individuals.

    3. Impact on software quality: Analyzing the correlation between the level of contributions and software quality or project success can provide insights into the relationship between quantity and quality. It helps project managers understand the value of active contributions in delivering a high-quality software product.

    4. Diverse expertise: Analyzing the data can reveal contributors who excel in specific areas such as coding, testing, documentation, or collaboration. This knowledge enables effective resource allocation and task assignment based on individual strengths and expertise.
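
One of the signals mentioned above, commit history, can be tallied directly from a cloned repository. The sketch below counts unique commit author emails over the last 90 days using standard git commands; it captures only the commit signal, not the reviews, issues, documentation, or community activity discussed above, and is not how the product necessarily computes this metric.

```python
import subprocess
from collections import Counter

# Count unique commit authors over the last 90 days in the current repository.
# Assumes the script runs inside a git clone.
log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--format=%ae"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

commits_per_author = Counter(email for email in log if email)

print("Active contributors (last 90 days):", len(commits_per_author))
for email, commits in commits_per_author.most_common(10):
    print(f"  {email}: {commits} commit(s)")
```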

Coding Days: What is the average number of workdays per week the team is actively coding?

  1. Description: Coding days refers to the dedicated periods of time during which developers primarily focus on writing and implementing code for a software project. These days involve translating requirements, designs, and specifications into functional code, following the team's standards and best practices.

  2. How is it calculated? Coding days are typically calculated from the number of working days available within a given timeframe, excluding non-coding activities (meetings, planning sessions, review and testing periods, other administrative tasks, etc.). In other words, it is determined by subtracting the number of non-coding days from the total available working days within a particular sprint, project, or iteration (an alternative commit-based approximation is sketched after this entry).

  3. What questions can I answer from this data?:

    1. How much time is allocated for actual coding work in a given time period?

    2. Are there any patterns or trends in the distribution of coding days across different projects or teams?

    3. Are there any factors or events that affect the allocation of coding days, such as holidays, vacations, or external dependencies?

    4. How does the distribution of coding days align with the estimated effort or complexity of coding tasks?

    5. Can the data on coding days be used to identify areas for improvement, such as optimizing non-coding activities or streamlining development processes?

  4. What should I take away from this data?

    1. Coding Time Allocation: The data reveals how much dedicated time is available for developers to focus on coding activities. It helps assess whether the allocated coding days are sufficient for completing the planned work.

    2. Workload Distribution: By analyzing coding days across different projects or teams, it becomes possible to identify workload imbalances. This insight can guide resource allocation decisions, ensuring equitable distribution of coding days.

    3. Productivity Assessment: Comparing coding days to the amount of work completed helps evaluate the team's productivity and efficiency. It highlights whether the available coding days are effectively utilized or if there are potential areas for improvement.

    4. Schedule Adherence: The data on coding days helps determine if development tasks are progressing according to schedule. It allows project managers and stakeholders to assess if coding efforts are on track and take appropriate actions if there are deviations from the planned timeline.
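
The description above subtracts non-coding days from available working days. A common alternative approximation, sketched below, treats any calendar day with at least one commit as a coding day and averages that count per ISO week per author; the commit data is hypothetical and the product's own definition may differ.

```python
from collections import defaultdict
from datetime import date

# Hypothetical commit log as (author, commit date) pairs; in practice these
# would come from the version control system.
commits = [
    ("dev_a", date(2024, 6, 3)), ("dev_a", date(2024, 6, 3)),
    ("dev_a", date(2024, 6, 4)), ("dev_a", date(2024, 6, 6)),
    ("dev_b", date(2024, 6, 4)), ("dev_b", date(2024, 6, 10)),
]

# A "coding day" here is approximated as a calendar day with at least one commit.
days_per_week = defaultdict(set)  # (author, (iso year, iso week)) -> distinct days
for author, day in commits:
    year, week, _ = day.isocalendar()
    days_per_week[(author, (year, week))].add(day)

weekly_counts = defaultdict(list)
for (author, _week), days in days_per_week.items():
    weekly_counts[author].append(len(days))

for author, counts in sorted(weekly_counts.items()):
    print(f"{author}: {sum(counts) / len(counts):.1f} coding days/week")
```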

 

Developer Activity: Tracking how many hours team members are working can ensure they are engaged but not getting burnt out.

  1. Description: Developer Activity refers to the specific time periods during which individuals are actively engaged in their development tasks. It represents the hours when contributors are actively coding, testing, debugging, collaborating, or participating in other development-related activities. Tracking active hours helps in understanding the productivity and availability of team members, facilitating effective coordination and communication within the development team.

  2. How is it calculated? Calculating active hours involves tracking time through time tracking tools, analyzing version control system activity, utilizing task or issue tracking systems, monitoring meeting and collaboration tools, examining communication and chat logs, and implementing self-reporting methods. Each approach provides insight into when individuals are actively engaged in development tasks (a simple hour-of-day sketch follows this entry). Clear guidelines and accurate data collection are crucial for obtaining reliable information on active hours.

  3. What questions can I answer from this data?

    1. What are the peak hours or periods of activity for the development team?

    2. Are there any variations in active hours among team members or specific roles?

    3. How does the distribution of active hours align with project milestones or deadlines?

    4. Are there any correlations between active hours and productivity or task completion rates?

    5. How do active hours align with planned or allocated effort estimates for specific tasks or projects?

  4. What should I take away from this data?

    1. Peak Productivity Hours: Identifying the peak hours of activity for the development team can help optimize scheduling and resource allocation, ensuring that critical tasks are assigned during periods of high productivity.

    2. Alignment with Milestones and Deadlines: Assessing the distribution of active hours in relation to project milestones and deadlines helps ensure that the team's efforts align with the project timeline. It facilitates proactive adjustments to ensure timely progress and manage workload distribution effectively.

    3. Productivity and Completion Rates: Examining correlations between active hours and productivity or task completion rates can reveal insights into the relationship between time spent actively on development tasks and the resulting outcomes. This understanding can inform productivity improvement strategies and help identify areas for optimization.

    4. Accuracy of Effort Estimates: Comparing active hours with planned or allocated effort estimates for specific tasks or projects helps assess the accuracy of estimation processes. It provides insights into the effectiveness of resource planning and informs future estimation and resource allocation strategies.
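
As one simple view of active hours, the sketch below buckets hypothetical activity timestamps (commits, reviews, issue updates) by hour of day. Real data would come from the sources listed above, and this is not necessarily how the metric is computed in the product.

```python
from collections import Counter
from datetime import datetime

# Hypothetical activity timestamps (commits, reviews, issue updates).
events = [
    datetime(2024, 6, 3, 9, 15), datetime(2024, 6, 3, 10, 40),
    datetime(2024, 6, 3, 14, 5), datetime(2024, 6, 4, 9, 55),
    datetime(2024, 6, 4, 16, 30), datetime(2024, 6, 4, 21, 10),
]

# Bucket events by hour of day to approximate when people are most active.
by_hour = Counter(event.hour for event in events)

for hour in range(24):
    if by_hour[hour]:
        print(f"{hour:02d}:00  {'#' * by_hour[hour]}")
```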

Overall Activity: Understanding when your team is working is key in planning new projects or understanding bandwidth. If your team is working most weekends it could really hurt morale and lead to burnout.

  1. Description: All team coding activity refers to the collective efforts of a software development team in collaboratively writing and maintaining code for a software project. It involves multiple team members working together to contribute to the codebase and achieve project objectives. This collaborative coding activity typically includes activities such as writing new code, modifying existing code, reviewing and providing feedback on code changes, resolving code conflicts, and ensuring code quality and adherence to coding standards.

  2. How is it calculated? Calculating all team coding activity involves analyzing data from version control systems, code contribution metrics, code review and collaboration tools, task or issue tracking systems, communication and collaboration logs, time tracking and worklogs, and code quality and defect metrics. These sources provide insights into code changes, contributions, collaboration, task completion, communication, and code quality. By combining these data points, a comprehensive understanding of team coding activity, engagement, and productivity can be obtained.

  3. What questions can I answer from this data?

    1. What is the overall level of coding activity and productivity of the team?

    2. How effectively is the team collaborating and engaging in code reviews?

    3. Are there any bottlenecks or challenges in the coding process?

    4. Is the team meeting coding deadlines and milestones?

  4. What should I take away from this data?

    1. Team Productivity and Engagement: By assessing the volume and frequency of code changes and contributions, it becomes possible to gauge the overall productivity and engagement of the team. This information can help identify high-performing individuals and areas where additional support or resources may be needed.

    2. Workload Distribution and Bottlenecks: Analyzing task completion rates and coding activity can shed light on workload distribution within the team. It helps identify potential bottlenecks, areas of imbalance, or overburdened team members, allowing for better resource allocation and workload management.

    3. Collaboration Effectiveness: Analyzing code review and collaboration metrics helps evaluate the effectiveness of the team's collaboration efforts. It provides insights into the level of engagement, code quality, and the ability to provide constructive feedback and timely reviews.

    4. Efficiency and Time Management: Analyzing time tracking data or code completion timelines can highlight areas where the team is efficient or struggling to meet deadlines. It provides opportunities for optimizing time management practices, improving estimation accuracy, and enhancing overall project planning. 

Overtime Hours: Overtime is a major contributor to employee burnout. Tracking how many hours team members are working can ensure they are engaged but not getting burnt out, reducing flight risk.

  1. Description: Overtime hours worked refer to the additional hours worked by individuals beyond their regular working hours. These extra hours are typically dedicated to coding, debugging, testing, or other development-related activities. Overtime may be required to meet project deadlines, handle critical issues, or address unexpected challenges. It often involves sacrificing personal time and can lead to increased workloads and potential burnout if not managed effectively. Overtime hours in software development are tracked to ensure fair compensation and adherence to labor laws and company policies.

  2. How is it calculated? Calculating overtime hours involves tracking recorded work hours and comparing them with regular working hours. This can be done through time tracking systems, timesheets, time clock systems, project management tools, or employee self-reporting (a minimal comparison sketch follows this entry). It is important for organizations to establish clear guidelines and accurate tracking methods to comply with labor laws and ensure fair compensation for employees.

  3. What questions can I answer from this data?

    1. How frequently do individuals or teams work overtime? 

    2.  Does working overtime correlate with increased productivity or project progress?

    3. Are overtime hours consistently required to meet project deadlines?

    4. Are there specific individuals or teams that consistently work more overtime hours? 

  4. What should I take away from this data?

    1. Project Planning and Estimation: Consistently requiring overtime hours to meet project deadlines suggests potential issues with project planning, estimation, or resource allocation. It emphasizes the importance of improving project management practices to ensure realistic timelines and avoid excessive reliance on overtime.

    2. Productivity and Efficiency: Analyzing the relationship between overtime hours and productivity metrics can provide insights into the effectiveness of working additional hours. It helps identify whether overtime leads to improved project progress or if there are other factors affecting productivity.

    3. Workload and Capacity: Assessing overtime hours helps understand the workload of individuals or teams. It can highlight periods of high demand, identify areas of potential resource constraints, and inform resource allocation decisions.

    4. Financial Implications: Calculating the costs associated with overtime hours helps organizations evaluate the financial impact on budgets and profitability. It can inform decisions regarding resource allocation, project prioritization, and potential investments in additional resources or process improvements.
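
A minimal overtime calculation compares logged hours against a regular workweek. The sketch below assumes a simple weekly worklog and a 40-hour baseline, both illustrative; actual policies and labor rules vary.

```python
# Hypothetical weekly worklogs, in hours, per person.
logged_hours = {
    "dev_a": [42, 45, 40, 50],
    "dev_b": [38, 40, 41, 39],
}
REGULAR_HOURS_PER_WEEK = 40  # illustrative baseline; adjust to local policy

for person, weeks in logged_hours.items():
    # Hours beyond the regular workweek count as overtime; shorter weeks
    # do not offset it in this simple model.
    overtime = sum(max(0, hours - REGULAR_HOURS_PER_WEEK) for hours in weeks)
    share = overtime / sum(weeks)
    print(f"{person}: {overtime} overtime hour(s) ({100 * share:.0f}% of logged time)")
```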

PM Activity by Day of Week: Viewing when your team is working on project management-related tasks can help you understand when people are most productive, how time zones affect your project status, and even if team members are working unhealthy schedules.

  1. Description: Analyzing project management activity by the day of the week reveals distinct patterns and behaviors. Mondays involve planning and setting priorities, while mid-week days focus on task execution and progress monitoring. Fridays are dedicated to wrapping up tasks and preparing for the next week. Weekends generally see lower project management activity, but exceptions may arise for urgent matters. By understanding these patterns, organizations can optimize resource allocation, identify coordination issues, and enhance project management practices for improved productivity and success.

  2. How is it calculated? Calculating project management activity by the day of the week involves analyzing data from various sources such as task tracking, timesheets, meeting schedules, communication logs, task status updates, and project management metrics. These data points provide insights into the frequency and intensity of project management tasks and activities throughout the week. By utilizing these approaches, organizations can gain a better understanding of how project management activities are distributed across different days, identify patterns, and optimize their project management practices.

  3. What questions can I answer from this data?

    1. Which day of the week has the highest and lowest project management activity?

    2. Are there specific days where project managers spend more time on planning and organizing tasks?

    3. Are there any patterns in task updates and progress based on the day of the week?

    4. Are there any noticeable trends or patterns in project management activity over time?

  4. What should I take away from this data?

    1. Activity Distribution: The data can reveal the distribution of project management activity across different days of the week. This can help identify which days have the highest and lowest levels of activity.

    2. Planning and Coordination: It may be observed that certain days of the week are more focused on planning, coordination, and setting priorities, while others are more centered around execution and monitoring of tasks.

    3. Impact on Project Progress: Examining project management activity by day of the week can help identify any correlation between activity levels and project progress. It can shed light on whether certain days contribute more to advancing project milestones and deliverables.