Code

Overview of these metrics:

  1. Active Branches
    Your open branches should represent active development efforts.

    1. Description:

      The number of active branches you have open can vary with workload; however, make sure to merge inactive branches before they become too old, or merging them will create additional work.

    2. How is it calculated?

      Active Branches data is calculated by looking at all branches across connected services and logging the number of days those branches have been open.
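As a rough sketch of that calculation, assuming the branch data has already been pulled from the connected services (the `branch_ages` helper and the input shape are hypothetical):

```python
from datetime import date

def branch_ages(branches, today=None):
    """Return the number of days each open branch has been active.

    `branches` is assumed to be a mapping of branch name -> the date
    the branch was opened (an illustrative shape; the real data comes
    from the connected services).
    """
    today = today or date.today()
    return {name: (today - opened).days for name, opened in branches.items()}

branches = {
    "feature/login": date(2024, 1, 10),
    "bugfix/crash": date(2024, 2, 1),
}
ages = branch_ages(branches, today=date(2024, 2, 15))
# flag branches older than an example 30-day threshold
stale = [name for name, days in ages.items() if days > 30]
```

The 30-day cutoff in the usage above is only an example threshold; pick whatever age your team considers "too old" to merge cleanly.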

    3. What questions can I answer from this data?

      1. What development branches is my team working on?

      2. How many development branches is my team working on?

    4. What should I take away from this data?

      1. You have x users but more than x active branches (with some tolerance for static branches like master). You should follow up with users who have more than one active open branch.

      2. Which branches should I touch base with my team on because they have been open for longer than x days?

  2. Code Addition Index
    Understanding who is adding to the codebase and who is removing code from the codebase can be one indicator of positive refactoring or individual contributor seniority.

    1. Description:
      Different phases of a project or codebase life cycle will produce different contribution profiles. Early on, there will be a large net increase in code as base features and a lot of boilerplate code are created. Later on, as the project hits its stride, more focus will be given to refactoring and efficiency, so you may see a larger amount of code removed, or a net-neutral codebase impact. Look for senior engineers removing and refactoring a lot of code. If you see this behavior from more junior engineers, it may be an indicator of growth and seniority.

    2. How is it calculated?
      This data is calculated by comparing the lines added against the lines removed. If all lines touched in a day were removals, you will see a -1 for that day; if all lines touched were additions, you will see a +1. If an equal number of lines were added and removed, the value is 0. Note that days with no code activity also appear as 0.
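A minimal sketch of that daily index, assuming it is the added/removed difference normalized by total lines touched (the exact formula is not stated above, and the function name is illustrative):

```python
def code_addition_index(lines_added, lines_removed):
    """Daily code addition index in [-1, +1].

    +1 when every touched line was an addition, -1 when every touched
    line was a removal, and 0 for an even split or for a day with no
    code activity at all.
    """
    total = lines_added + lines_removed
    if total == 0:
        return 0.0  # no activity that day
    return (lines_added - lines_removed) / total
```

Values between the extremes (e.g. 60 lines added, 20 removed gives +0.5) indicate mixed expansion and cleanup on the same day.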

    3. What questions can I answer from this data?

      1. Are developers adding to or removing from the code base?

    4. What should I take away from this data?

      1. Does the pattern match the work that was expected? If the team is developing new features, there should be an expected expansion of the codebase. If the team is only doing maintenance, the expansion should be relatively low. If the team is refactoring, you should generally see a contraction in the codebase.

      2. If there are significant additions or removals, follow up with the team on the initiatives, events, or changes that caused such a dramatic shift and ensure they match expected work patterns.

  3. Code Duplication
    Code Duplication offers valuable insight into how much code is being re-used throughout the codebase.

    1. Description:
      Copy-paste duplication in a codebase leads to code that is very hard to maintain. Not only do you need to update multiple identical sections when making feature updates, but duplicated code is generally harder to follow and read. While some level of this type of duplication is expected, keeping it low is a sign that you're writing high-quality code.

    2. How is it calculated?
      This metric is generated by looking at the similarity between the lines of code in each commit. If every line of code added in a single commit has a duplicate, this index will be 100; if no lines have duplicates (every line added in the commit is unique), it will be 0. This metric only looks at lines added, not lines removed.
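That per-commit index could be sketched as follows, assuming exact line matches count as duplicates (the real similarity and normalization rules are not specified here, and the function name is illustrative):

```python
from collections import Counter

def duplication_index(added_lines):
    """Duplication index for a single commit, on a 0-100 scale.

    100 when every added line has at least one duplicate within the
    commit; 0 when every added line is unique. Only added lines are
    considered, mirroring the metric's description.
    """
    if not added_lines:
        return 0.0
    counts = Counter(added_lines)
    duplicated = sum(1 for line in added_lines if counts[line] > 1)
    return 100.0 * duplicated / len(added_lines)
```

A real implementation would likely normalize whitespace and ignore trivial lines (blank lines, lone braces) before comparing; this sketch compares raw strings.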

    3. What questions can I answer from this data?

      1. Are developers copying and pasting a large amount of code?

    4. What should I take away from this data?

      1. Highly duplicative codebases can be difficult to maintain and are often the source of escaped defects down the line. When significant code duplication occurs it is important to ascertain why and identify methods of reducing duplication.

      2. A key action to take when your codebase has new duplications added is to talk to developers creating the duplication about different strategies to work around duplicative code.

  4. Code Type
    Looking at Code Type helps you understand where development effort is going and normalize reported productivity. Identifying what your developers are doing helps keep your team on track.

    1. Description:
      Our unique algorithm segments the types of code a developer is creating to help you understand your team's focus. Generally, code churn is considered bad: if a developer is constantly rewriting a section of a codebase, this can be problematic. It's not always a development problem, however. Churn can have a number of causes, such as requirements that are in constant flux or a developer working with a new library to get configurations correct, so treat these as indicators that help you identify problems rather than metrics to manage by.

    2. How is it calculated?
      Code type consists of 6 categories: New Work, Legacy Refactor, Help Others, Churn, Library, and Comments. Each type can help indicate the progress of the codebase.

      1. New Work represents features that have been added or new functionality. This is calculated by looking at net new or net removed lines of code that do not fall into the other buckets.

      2. Legacy Refactor typically represents paying off technical debt. This is calculated by looking at code replacements that happen on code that has not been worked on in over 60 days.

      3. Help Others represents an employee working on a feature or product that may not have been theirs. This is calculated as new work that happens in a part of the code base that a user typically doesn’t work in. If the user has zero commits in a file, and the file is active, the work is marked as Help Others.

      4. Churn represents work that is going into the same spot, usually to fix bugs or rework problem code. This is calculated as code that has been rewritten across 2 or more commits in the past 30 days.

      5. Library represents work that comes from code imported from other sources. This is calculated by identifying large single files that are added, are over 100 lines, and have no single-line modifications, and excluding them from the other categories.

      6. Comments represent the amount of in-code documentation. This is calculated by looking for specific markers in popular file types, and identifying block/multi-line and whole line comments. [Lines with trailing line comments are treated as code].
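The six buckets above could be approximated with a rule-based classifier like the sketch below. The thresholds (60 days, 30 days, 100 lines, 2 commits) come from the descriptions; the record fields, the precedence order, and the function name are illustrative assumptions, not the actual algorithm:

```python
def classify_change(change):
    """Classify a change record into one of the six code-type buckets.

    `change` is a hypothetical dict whose fields mirror the rules above.
    """
    if change["is_comment"]:
        return "Comments"
    # Large imported file with no line-level edits -> Library
    if change["file_lines_added"] > 100 and not change["has_single_line_mods"]:
        return "Library"
    # Same code rewritten across 2+ commits in the past 30 days -> Churn
    if change["rewrites_in_last_30_days"] >= 2:
        return "Churn"
    # Replacing code untouched for over 60 days -> Legacy Refactor
    age = change["replaces_code_older_than_days"]
    if age is not None and age > 60:
        return "Legacy Refactor"
    # Author has no commits in an otherwise active file -> Help Others
    if change["author_commits_in_file"] == 0 and change["file_is_active"]:
        return "Help Others"
    # Everything that falls into no other bucket -> New Work
    return "New Work"
```

In practice the buckets would be computed from commit history rather than pre-populated flags, but the precedence shown here (comments first, New Work as the fallback) matches how the categories are described.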

    3. What questions can I answer from this data?

      1. What type of code is my team writing?

      2. How is the type of code my team is writing changing over time?

      3. What type of code is my team writing? (grouped view)

    4. What should I take away from this data?

      1. Does the distribution of work map to the initiatives in place? If you are deploying new features but see a lot of “Legacy Refactor” then a conversation with the team about what they are working on is warranted.

      2. A high amount of code churn can imply a developer who is spinning their wheels or struggling; it can also imply future technical debt and code-quality challenges.

  5. Commit Volume
    Commit Volume helps you understand your team's work output. While this doesn't capture everything your team does, it is a good way to identify employees who may be struggling.

    1. Description:

    2. How is it calculated?

    3. What questions can I answer from this data?

      1. How is my team's commit volume changing over time?

      2. Is my team creating larger or smaller commits?

      3. How often does my team commit code?

    4. What should I take away from this data?

      1. An irregular commit volume can indicate poor developer hygiene. Best practice is to make frequent, granular commits, leading to maintainable, iterative units of work.

      2. Infrequent commits are another example of poor development practices. Daily commits are typically the best-practice standard, creating iterative units of work and a better ability to stay in sync with the main codebase.

      3. Large commits create challenging downstream code review problems and can be hard to integrate into the codebase. Best practices are to generate smaller, frequent commits.

  6. Commit Volume by File Type
    A codebase can consist of many different technologies or programming languages. Keeping track of the balance can be hard as your product evolves over time. Looking at the new code added by file type can help you understand what skills are needed moving forward for your team.

    1. Description:
      Most projects are made of more than a single type of code. For early-stage, monolithic projects, there can be many file types, whereas for more mature microservices-based projects, the work can become focused on a single type of code. This data can help you understand which types of code are being created, where to staff if you are in growth mode, and when to split apart a team or codebase if your team is losing focus.

    2. How is it calculated?
      Allstacks generates this data by looking at all repositories connected to this project and scanning the names of the files in each commit to identify their file types. The value we use for this metric is the total number of lines changed, defined as lines_added + lines_removed, for each type of file.
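A minimal sketch of that aggregation, using file extensions as a stand-in for the file-type detection (the input shape and function name are assumptions):

```python
import os
from collections import defaultdict

def volume_by_file_type(commits):
    """Total lines changed (lines_added + lines_removed) per file extension.

    `commits` is assumed to be an iterable of commits, each a list of
    (path, lines_added, lines_removed) tuples.
    """
    totals = defaultdict(int)
    for commit in commits:
        for path, added, removed in commit:
            ext = os.path.splitext(path)[1] or "(no extension)"
            totals[ext] += added + removed
    return dict(totals)
```

Summing additions and removals together means a heavily refactored file type counts as much "work" as a rapidly growing one, which is the intent of this metric.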

    3. What questions can I answer from this data?

      1. What file types are worked on the most?

    4. What should I take away from this data?

      1. This data can help you understand if the concentration of work is shifting into new technologies or if certain technologies take more work to maintain than others. It can help you make decisions on how to staff, when to refactor, and how to make investments moving forward.

  7. Commit Volume by Pull Request
    Some features are very large and touch a lot of code. Those tend to be the riskiest, since there is such a large volume of changes. Keeping track of the scale of feature pull requests can help you plan for risk.

    1. Description:
      This metric is a good way to identify a very large pull request that isn't getting merged. Some pull requests can linger open due to blockers and may end up forgotten in the process. This is a great way to keep tabs on those pull requests to prevent code rot and eventual duplication of work.

    2. How is it calculated?
      This data is generated by looking at all pull requests that are not closed or merged and summing the number of changed lines in the commits that are part of those pull requests.
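A sketch of that calculation, assuming each pull request record carries its state and the changed-line counts of its commits (the field names are hypothetical):

```python
def open_pr_volume(pull_requests):
    """Changed lines sitting in pull requests that are neither closed
    nor merged, keyed by pull request id.

    Each PR is assumed to be a dict with "id", "state", and
    "commit_line_changes" (a list of changed-line counts, one per commit).
    """
    return {
        pr["id"]: sum(pr["commit_line_changes"])
        for pr in pull_requests
        if pr["state"] not in ("closed", "merged")
    }
```

Sorting the result by value would surface the largest open pull requests first, which is where extra review attention is most useful.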

    3. What questions can I answer from this data?

      1. How many lines of code do I have open in Pull Requests?

    4. What should I take away from this data?

      1. Identifying large pull requests and working with the team to get extra code review and QA performed is valuable for ensuring fewer defects downstream. If there are too many large pull requests in general, it is valuable to work with the team to create more granular commit and pull-request processes.

Code Review
Overview of these metrics: An indicator of code quality is if new code and features are peer-reviewed. These Code Review or Pull Request Metrics can be used to spot risky code and processes.

  1. Open Pull Requests

Open pull requests represent new features or bug fixes waiting for review. Identifying aging or old pull requests can help you discover features that have been abandoned or have other issues introduced into production.

...

  1. Description:

    1. How is it calculated?

    2. How can I look at this data?

    3. What should I take away from this data?


  1. Pull Requests Merged

Understanding who is merging Pull Requests will help you understand the distribution of work. (Guessing on this one.)

  1. Description:

    1. How is it calculated?

    2. How can I look at this data?

    3. What should I take away from this data?

Builds
Overview of these metrics: These build metrics are indicators of developer productivity and code quality based on the average time a build takes and the success rate by developer so you can easily spot risks and areas of improvement.

  1. Average Build Delay

The average amount of time a developer has to wait for a build to successfully run as an indicator of developer productivity.

...

Process Health
Overview of these metrics:

  1. Active Issues

Visualizing the number of issues in progress helps see the distribution of work and determine when a project will be completed.

  1. Description:

    1. How is it calculated?

    2. What questions can I answer from this data?

    3. What should I take away from this data?

...

  1. Description:

    1. How is it calculated?

    2. What questions can I answer from this data?

    3. What should I take away from this data?


  1. Burndown

The burndown chart helps you visualize how much work and what kind of work is left within a particular project so you can determine when the work will be done or where your team may need additional support.

  1. Description:

    1. How is it calculated?

    2. What questions can I answer from this data?

    3. What should I take away from this data?

...

  1. Description:

    1. How is it calculated?

    2. What questions can I answer from this data?

    3. What should I take away from this data?

Team Productivity
Overview of these metrics: These metrics help you understand team performance, trends, and visibility into what is being worked on.

  1. Epic Focus

Determine which efforts are getting the most focus from the engineering team.

...