AI Readiness is a repository-level assessment. You get a score, status, factor breakdown, and prioritized recommendations for each repository. Propel stores one readiness snapshot per company and repository.

What AI readiness measures

AI Readiness measures how prepared your repository is for reliable AI-assisted engineering work. The current rubric version is v1. The rubric evaluates:
  • Agent instructions and reusable rules
  • Documentation depth and clarity
  • Project structure and discoverability
  • Build and development consistency
  • Validation quality (CI, tests, lint)
  • Automation and governance workflows

Assessment lifecycle

Status         Meaning
not_available  Propel cannot score the repository yet.
computing      Propel is currently computing a new assessment.
ready          A current score is available.
stale          A score exists but is older than 14 days.
failed         The assessment job failed.
If a score is ready but its computed_at timestamp is older than 14 days, Propel surfaces the status as stale.

Not available reasons

Reason              Meaning
not_assessed        No assessment has been run yet.
no_active_codebase  The repository is not attached to an active codebase.
no_repository_data  Repository structure data is missing.
empty_repository    No source files were found.
assessment_error    Assessment failed internally.

Scoring algorithm

The rubric contains 33 pass/fail criteria across levels 1 through 5. Propel computes the score as:
score = round((passed_criteria / total_criteria) * 100)
The result is clamped to the range 0..100.
Propel also computes maturity level:
  • Levels 1 through 5 are evaluated in order.
  • A level is achieved when at least 80% of that level’s criteria pass.
  • The first level below 80% stops progression.
  • next_level_progress is the pass percentage for the next, not-yet-achieved level.
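The scoring and maturity rules above can be sketched as follows. This is a minimal illustration of the stated algorithm, not Propel's implementation; the function names and the level_results input shape are assumptions:

```python
def readiness_score(passed: int, total: int) -> int:
    """score = round((passed / total) * 100), clamped to 0..100."""
    return max(0, min(100, round(passed / total * 100)))

def maturity(level_results: list[tuple[int, int]]) -> tuple[int, float]:
    """level_results[i] is (passed, total) for level i + 1.

    A level is achieved when at least 80% of its criteria pass;
    the first level below 80% stops progression and becomes the
    basis for next_level_progress.
    """
    achieved = 0
    next_level_progress = 0.0
    for passed, total in level_results:
        pct = passed / total * 100
        if pct >= 80:
            achieved += 1
        else:
            next_level_progress = pct  # pass percent for the next level
            break
    return achieved, next_level_progress
```

For example, a repository passing all of level 1, five of six level 2 criteria, and four of ten level 3 criteria would sit at maturity level 2 with 40% progress toward level 3.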

Rubric definition (v1)

Level 1 baseline

  • Primary instruction file exists: AGENTS.md, CLAUDE.md, or CODEX.md.
  • Reusable rules exist in .propel/rules, .cursor/rules, .claude/rules, .codex/rules, or .cursorrules.
  • A README exists.
  • At least one standard layout directory exists (src, app, cmd, pkg, internal, services).
  • A developer entrypoint exists (Makefile, Justfile, .devcontainer, or CI config).
  • Lint or static analysis config exists.
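A few of the Level 1 file-presence criteria can be sketched as path checks. This is a hypothetical checker mirroring the criteria listed above; the real assessment may inspect files differently, and the function name and input shape are assumptions:

```python
# File and directory names taken from the Level 1 criteria above.
INSTRUCTION_FILES = {"AGENTS.md", "CLAUDE.md", "CODEX.md"}
RULES_LOCATIONS = (".propel/rules", ".cursor/rules", ".claude/rules",
                   ".codex/rules", ".cursorrules")
LAYOUT_DIRS = ("src/", "app/", "cmd/", "pkg/", "internal/", "services/")

def level1_checks(paths: set[str]) -> dict[str, bool]:
    """paths: repository-relative file paths, e.g. {'src/main.py', 'README.md'}."""
    return {
        "instruction_file": bool(INSTRUCTION_FILES & paths),
        "reusable_rules": any(p.startswith(loc)
                              for loc in RULES_LOCATIONS for p in paths),
        "readme": any(p.startswith("README") for p in paths),
        "standard_layout": any(p.startswith(d)
                               for d in LAYOUT_DIRS for p in paths),
    }
```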

Level 2 team readiness

  • At least one nested rules file exists (for example .propel/rules/api/...).
  • CONTRIBUTING guidance exists.
  • Test files exist.
  • CI pipeline config exists.
  • PR validation workflow exists.
  • Environment template exists (for example .env.example).

Level 3 scalable engineering

  • Architecture or ADR documentation exists.
  • Docs depth is at least 3 files.
  • Standard layout coverage is at least 2 directories.
  • Max directory depth is between 2 and 12.
  • Test framework config types are at most 2.
  • Dependency update automation exists.
  • Autofix automation exists.
  • Security automation exists.
  • Secret scan automation exists.
  • Stale work automation exists.

Level 4 governance and consistency

  • Code review workflow is active.
  • CODEOWNERS enforcement exists (CODEOWNERS plus PR validation or code review automation).
  • Docs depth is at least 8 files.
  • JavaScript lockfile strategy is consistent (at most 1 lockfile type).
  • Python tooling formats are bounded (at most 2).
  • Language ecosystem count is bounded (at most 4).

Level 5 mature operating model

  • Nested rule files are at least 4.
  • Docs depth is at least 20 files.
  • Standard layout coverage is at least 3 directories.
  • CI, tests, and lint are all present.
  • Tooling fragmentation is low: js_lockfile_kinds <= 1 and test_framework_config_kinds <= 2 and java_build_system_kinds <= 1 and python_tooling_kinds <= 2.

Factors and recommendations

Propel groups criteria into pillar factors. Each factor returns:
  • score: passed criteria in that pillar
  • max_score: total criteria in that pillar
  • evidence: proof points from your repository
  • recommendations: practical improvements for that pillar
You also get top_recommendations, a prioritized set of next actions. Propel prioritizes recommendations from your next maturity level first.
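The per-pillar fields above can be illustrated with a small aggregation sketch. The field names mirror this page, but the input shape and function name are assumptions about how criteria might roll up, not Propel's actual payload:

```python
def factor_summary(criteria: list[dict]) -> dict:
    """criteria: [{'passed': bool, 'evidence': str, 'recommendation': str}, ...]

    Passed criteria contribute evidence; failed criteria contribute
    recommendations for that pillar.
    """
    passed = [c for c in criteria if c["passed"]]
    failed = [c for c in criteria if not c["passed"]]
    return {
        "score": len(passed),
        "max_score": len(criteria),
        "evidence": [c["evidence"] for c in passed],
        "recommendations": [c["recommendation"] for c in failed],
    }
```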

Next steps

  1. Open Codebase and click AI Readiness on a repository.
  2. Click Refresh Assessment to compute the latest score.
  3. Apply top recommendations and refresh again to track improvement.