Tools Connect: Anthropic (Beta)

Allstacks now supports a Beta integration with Anthropic. These metrics allow teams to monitor the performance, usage, and cost of Anthropic services.


Important Notes

  • Beta Status: This integration is in Beta and hidden behind a feature flag. Please contact your Allstacks Customer Success Manager (CSM) to enable it.

  • Active Users: Data presented in Allstacks reflects activity from the last 90 days.

  • Live Data: Metrics refresh on every page load; fresh data is fetched each time.

  • Versioning: Changing your Anthropic API version requires reconfiguring the connection with a new Admin API key and Version value.


Prerequisites

You will need:

  • The Anthropic Beta feature flag enabled by your Allstacks CSM.

  • An Anthropic Admin API key.

  • Your Anthropic Version value.


How We Connect

  1. Once your Customer Success Manager has enabled the Anthropic Beta feature flag, Anthropic Usage Metrics becomes available to navigate to in your Configure section.

  2. On the Anthropic Usage Metrics dashboard page, click the Configure button in the top right. A pop-up window will appear where you can enter your Admin API key and Anthropic Version; save it to start pulling in data.

     

    image-20251010-181527.png

We pull data from Anthropic's Admin API, which you can also query directly to validate that Allstacks is representing the same data you are seeing in Anthropic.
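As a minimal sketch of how you might validate the data yourself, the example below builds a request against Anthropic's organization usage-reporting Admin API. The endpoint path, `limit` parameter, and header names are assumptions based on Anthropic's public Admin API documentation; substitute your own Admin API key and the Version value you configured in Allstacks.

```python
import urllib.request

ADMIN_API_KEY = "sk-ant-admin-..."  # placeholder: your Anthropic Admin API key
ANTHROPIC_VERSION = "2023-06-01"    # placeholder: the Version value you configured

# Assumed endpoint for organization-level usage reporting (see Anthropic's
# Admin API docs for the authoritative path and parameters).
URL = "https://api.anthropic.com/v1/organizations/usage_report/messages?limit=7"

req = urllib.request.Request(
    URL,
    headers={
        "x-api-key": ADMIN_API_KEY,          # Admin keys differ from regular API keys
        "anthropic-version": ANTHROPIC_VERSION,
    },
)

# To actually send the request (requires a valid Admin key):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Comparing the totals returned here against the Allstacks dashboard is a quick way to confirm both sides are showing the same 90-day window.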


Metrics We Show from Anthropic

Once configuration is completed, Allstacks displays the following metrics sourced from Anthropic:

image-20251010-181427.png

 

1. Total Active Users

Number of users active in the past 90 days.
Why it matters: Helps measure adoption and ongoing engagement across the organization.

2. Total Lines of Code Accepted

Total lines of code added and removed in the last 90 days.
Why it matters: Indicates overall coding activity and developer productivity trends.

3. Total Spend

Estimated total USD spent in the past 90 days.
Why it matters: Tracks cost efficiency and helps identify opportunities to optimize usage.

4. Cost Over Time

Daily cost trends across services and models.
Why it matters: Reveals usage spikes, spending patterns, and cost-saving opportunities.
Note: Values under $0.01 display as zero.
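The under-$0.01 display rule above can be sketched as follows; this is an illustrative example of the rounding behavior, not Allstacks' actual chart code.

```python
def display_cost(usd: float) -> str:
    """Format a daily cost the way the Cost Over Time chart describes it."""
    # Values below one cent display as zero.
    if usd < 0.01:
        return "$0.00"
    return f"${usd:.2f}"

print(display_cost(0.004))   # → $0.00
print(display_cost(1.2345))  # → $1.23
```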

5. Lines of Code Over Time

Daily lines of code added and removed.
Why it matters: Shows momentum in development and how AI assistance impacts coding speed.

6. User Activity Over Time

Daily active users and usage patterns.
Why it matters: Highlights engagement trends and helps spot drops in adoption early.

7. Model Usage Distribution

Bar chart of total cost by AI model, with dynamic colors per model type.
Why it matters: Identifies which AI models drive the most cost and usage for better resource planning.

8. Token Usage Over Time (by Model)

Token usage trends across Claude models, grouped by model type.
Why it matters: Tracks model efficiency and helps align token use with cost and performance goals.

9. Tool Actions Distribution

Daily accepted and rejected AI tool actions.
Why it matters: Shows how often AI suggestions are accepted, helping evaluate model accuracy and value.

10. User Usage Breakdown

Detailed usage and cost stats for each active user.
Why it matters: Provides visibility into top contributors, power users, and potential training needs.