Standard Metrics For AI Collaboration

The AI Collaboration Index (ACI) gives developers, recruiters, hiring managers, and enterprises a common language for AI effectiveness and fluency.

ACI Score: 118
The Orchestrator-Sprinter*
* Archetype matching requires Full Report.

One score, easily understood

  • Standardized: normalized to mean 100, std dev 15
  • Universal: one number, same meaning everywhere
  • Verifiable: Report ID for independent confirmation
  • Actionable: track improvement over time
130 |
    |                                            •••••118
115 |                                ••••••••••••
    |                          •    •
100 |••••••••          •••••••   •••
    |        •        •
 85 |         ••••••••
    |
 70 |
    +---------------------------------------------------

// ACI: mean 100, std dev 15
//
// If you can't measure it, you can't improve it.

Developers

Measure your AI collaboration skills. Improve over time. Stand out to employers.

  1. Download the ACI script
  2. Run locally on your transcripts
  3. Get instant ACI Score preview in CLI
  4. Send zip for verified score and full report
Download Script

Hiring

Objective candidate signal. Independent standard. Compare with confidence.

  1. Candidate shares Report ID
  2. Verify score at acimetrics.com/verify
  3. Compare candidates fairly
  4. Request detailed reports
Request Sample

Enterprise

Maximize ROI on AI tooling. Set benchmarks. Drive continuous improvement.

  1. Run pilot
  2. Establish baselines
  3. Automate
  4. Track improvement
Contact Sales

// good for developers, hiring, enterprise
//
// solves: you get what you measure
//
// solves: no way to measure ROI on AI spend

Immediate scoring in your terminal

Run the script locally. Your data stays private. Get your estimated ACI Score straight away in your terminal. Share the generated zip with us to get your verified score and full report. For on-premise solutions, Contact Us.

$ node aci-score.js ~/.claude/projects/my-project/

╔══════════════════════════════════════════════════════════════╗
║  AI COLLABORATION INDEX (ACI) - Estimate                     ║
║  Calculated: Jan 21, 2026                                    ║
║  Report ID: 7xK9-m2Pq-4R8t-W5nZ-cXv4-aB3y                    ║
╚══════════════════════════════════════════════════════════════╝

  ACI SCORE (estimate)    118   ███████████░░░░░

  Velocity                122   ████████████░░░░
  Accuracy                92    ██████░░░░░░░░░░
  Integration             124   ████████████░░░░
  Literacy                112   █████████░░░░░░░

──────────────────────────────────────────────────────────────
  RAW METRICS
──────────────────────────────────────────────────────────────

  Sessions:               47
  Date range:             Dec 15 – Jan 20, 2026
  Active hours:           38.4
  Edit-to-Write:          4.2:1
  Steering frequency:     6.1%
  Concurrency:            2.3x
  Deploy frequency:       5.4/day

══════════════════════════════════════════════════════════════

  Verified Score and full report with AI-powered analysis available.

  Upload Zip file now for full report?

    [Y] Yes, upload to acimetrics.com
    [N] No, maybe later

  Press Y or N: _

// runs locally, no data leaves your machine
//
// send zip file to ACI Metrics for full report

How It Works

The ACI Score combines four dimensions, each with its own subtests and scores (a rough sketch of how they might roll up follows the lists below):

Velocity

How fast do you code, commit and ship?

  • Deploys per day
  • Time from first commit to production
  • Tasks completed per session
  • And 3 more...

Accuracy

How often are you right the first time?

  • Rollbacks and hotfixes
  • Time spent fixing vs. building
  • Accepted AI output ratio
  • And 5 more...

Integration

How embedded is AI in your workflow?

  • Parallel sessions and tasks running
  • Task types covered (bugs, features, refactors)
  • Healthy consumption patterns over time
  • And 4 more...

Literacy

How effective is your collaboration style?

  • Edits vs. full rewrites
  • Course corrections per task
  • Message efficiency
  • And 6 more...
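
The exact subtests and weights behind each pillar aren't published, but as a rough structural sketch (not the actual formula), assume each pillar has already been scored on the ACI scale and the composite simply averages them. The pillar values below come from the sample report above; its composite is 118, not the naive average, so the real weighting and re-norming clearly differ.

// Illustrative structure only: real subtests, weights, and norming are not public.
// Pillar values are taken from the sample report (each on the mean-100 / std-dev-15 scale).
const pillars = {
  velocity: 122,     // deploys/day, commit-to-production time, tasks/session, ...
  accuracy: 92,      // rollbacks and hotfixes, fix-vs-build time, accepted AI output ratio, ...
  integration: 124,  // parallel sessions, task-type coverage, consumption patterns, ...
  literacy: 112,     // edits vs. rewrites, course corrections, message efficiency, ...
};

// Naive composite: an unweighted mean of the four pillar scores.
const values = Object.values(pillars);
const naiveComposite = Math.round(values.reduce((a, b) => a + b, 0) / values.length);

console.log(naiveComposite); // 113 (the sample report's 118 implies a different weighting)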

We measure each dimension in three passes, combining deterministic analysis with AI interpretation:

1. Script Analysis

Proprietary parsing scripts extract structured data from raw transcripts, git logs, and deploy records. Deterministic, reproducible, fast.
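
For a feel of what this deterministic pass might look like, here is a minimal sketch (not the actual ACI script) that walks a Claude Code project directory and counts Edit vs. Write tool calls to estimate the Edit-to-Write ratio shown in the sample output above. It assumes JSONL transcripts whose assistant messages carry tool_use blocks named "Edit" and "Write"; field names may differ across Claude Code versions.

// Sketch only: count Edit vs. Write tool calls across a transcript directory.
// Assumes Claude Code's JSONL transcript layout; adjust field names if yours differ.
const fs = require("node:fs");
const path = require("node:path");

function editToWriteRatio(dir) {
  const counts = { Edit: 0, Write: 0 };
  const files = fs.readdirSync(dir).filter((f) => f.endsWith(".jsonl"));
  for (const file of files) {
    for (const line of fs.readFileSync(path.join(dir, file), "utf8").split("\n")) {
      if (!line.trim()) continue;
      let entry;
      try { entry = JSON.parse(line); } catch { continue; }
      const content = entry?.message?.content;
      if (!Array.isArray(content)) continue;
      for (const block of content) {
        if (block.type === "tool_use" && block.name in counts) counts[block.name] += 1;
      }
    }
  }
  return counts.Write > 0 ? counts.Edit / counts.Write : null;
}

console.log(editToWriteRatio(process.argv[2] || ".")); // e.g. an Edit-to-Write of 4.2:1 prints 4.2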

2. Normed Benchmarking

Your metrics compared against our normed population. This turns raw numbers into percentiles and enables IQ-style scoring.
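
In practice that means a raw metric is placed in the normed distribution and re-expressed on the familiar mean-100, std-dev-15 scale. A toy example, assuming a roughly normal population (the mean and std dev below are hypothetical, not real norms):

// Toy IQ-style scaling: map a raw metric to a standard score (mean 100, std dev 15).
// A real implementation would use empirical percentiles from the normed population.
function standardScore(raw, popMean, popStdDev) {
  const z = (raw - popMean) / popStdDev; // standard deviations above/below the population mean
  return Math.round(100 + 15 * z);
}

// 5.4 deploys/day against a hypothetical norm of 3.1 +/- 1.6
console.log(standardScore(5.4, 3.1, 1.6)); // 122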

3. AI Interpretation

Our standardized AI workflows, agents and prompts add context: content complexity, task lifecycle detection, improvement work-ons, archetype matching.

To give it a try, follow this workflow:

  1. Download Script
Get the ACI scoring script from our repository (for now, Contact Us to request it)
  2. Run Locally
    Execute on your local transcripts — your data stays private
    (Only compatible with Claude Code transcripts at this time)
  3. View ACI Score
    Estimated ACI Score with pillar breakdown right in your terminal
  4. Send Zip
    Upload anonymized data for full analysis and verification
  5. View Full Report
    AI-powered insights, personalized feedback and work-ons
  6. Share
Share your unique Report ID; employers verify it at acimetrics.com/verify

// download → run → score → send → report → share

The Full Report

The full report is available when you share the Zip file generated by the script you run locally. In addition to the verified score, the AI-assisted analysis provides the following sections.

Executive Summary

Your collaboration style in 3-5 bullets. Clear characterization for self-awareness or sharing with employers.

Pillar Deep-Dive

What's driving each score. Where you excel, where you're leaving performance on the table.

Pattern Detection

Behavioral signatures identified: parallel orchestration, visual verification, terse steering, flow states.

Task Analysis

Every task scored for complexity. Performance adjusted for difficulty. See how you handle routine fixes vs. novel implementations.

Collaboration Style

Your MBTI-style profile match. Are you an Orchestrator, Refiner, Sprinter, or Explorer?

Work-Ons

Personalized coaching recommendations. Concrete actions to improve each pillar based on your patterns.

// Verified score and full report available when you share your Zip file

Developers

Measure your skills. Get actionable work-ons. Improve over time.

Download Script

Hiring

Objective assessment criteria. Compare and choose with confidence.

Request Sample

Enterprise

Set benchmarks. Drive improvement. Maximize ROI.

Contact Sales

// let's get going