Methodology

How we compute your persona

Vibe Coding Profile infers your Vibe Coding persona by spotting AI-assisted engineering patterns in the Git history of repos you connect.

We do not use your prompts, IDE workflow, PR comments, or any private chats, and we do not read your code content. We only use Git/PR metadata that helps us infer patterns.

1) What we look at (and what we don’t)

  • Commit metadata: timestamps, files changed, additions/deletions.
  • Commit message subjects: lightweight patterns like feat/fix/test/docs (see the sketch after this list).
  • Changed file paths when available (to infer which subsystems changed together).
  • PR metadata when available: changed-files counts, issue linking, checklists.
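
As one concrete illustration, here is a minimal sketch of the kind of subject-line classification we mean. The regexes and category names below are hypothetical stand-ins, not the exact production rules:

```ts
// Hypothetical subject-line classifier: buckets a commit subject into
// lightweight categories like feat/fix/test/docs. Patterns are illustrative.
type SubjectKind = "feat" | "fix" | "test" | "docs" | "other";

function classifySubject(subject: string): SubjectKind {
  const s = subject.toLowerCase();
  // Conventional-commit prefixes ("feat:", "fix(scope):", "feat!:") or loose keywords.
  if (/^feat(\(|:|!)/.test(s) || /\bfeature\b/.test(s)) return "feat";
  if (/^fix(\(|:|!)/.test(s) || /\b(bugfix|hotfix)\b/.test(s)) return "fix";
  if (/^test(\(|:|!)/.test(s) || /\btests?\b/.test(s)) return "test";
  if (/^docs(\(|:|!)/.test(s) || /\breadme\b/.test(s)) return "docs";
  return "other";
}
```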

In Phase 0 (GitHub API), we analyze a time-distributed sample of up to 300 commits per repo, so that long-lived evolution is reflected without pulling the entire history.
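
"Time-distributed" here means spreading the sample across the repo's lifetime rather than taking only the newest commits. A minimal sketch of one way to do that (not the exact production sampler):

```ts
// Illustrative time-distributed sampler: split the repo's lifetime into
// `cap` equal windows and keep the first commit per window, so a ten-year
// history is not dominated by its most recent month.
interface Commit {
  sha: string;
  timestamp: number; // epoch milliseconds
}

function sampleCommits(commits: Commit[], cap = 300): Commit[] {
  if (commits.length <= cap) return commits;
  const sorted = [...commits].sort((a, b) => a.timestamp - b.timestamp);
  const first = sorted[0].timestamp;
  const span = sorted[sorted.length - 1].timestamp - first || 1;
  const buckets = new Map<number, Commit>();
  for (const c of sorted) {
    const i = Math.min(cap - 1, Math.floor(((c.timestamp - first) / span) * cap));
    if (!buckets.has(i)) buckets.set(i, c);
  }
  return [...buckets.values()];
}
```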

We do not read code content or prompts. Any “AI-assisted” language here is an inference from Git/PR patterns, not proof.

2) The six axes (A–F)

Each axis is a 0–100 score computed from simple, deterministic signals. Higher scores mean the pattern shows up more often in your history.

  • A. Automation: large, wide changes (high files-changed per commit, big commits, big PRs).
  • B. Guardrails: safety signals (tests/docs/CI showing up early, plus checklists and hygiene commits).
  • C. Iteration: fast feedback loops (fix-after-feature sequences, a high fix ratio, fix-heavy sessions).
  • D. Planning: up-front structure (conventional commits, issue-linked PRs, docs before features).
  • E. Surface Area: breadth across subsystems (how many areas, such as ui/api/db/infra/tests/docs, change together).
  • F. Rhythm: shipping cadence (burstiness and how big your typical work sessions look).
These axes are designed to reflect how AI-assisted engineering often shows up in Git: a bias toward bigger generated chunks (A), stronger test/checklist habits to stay safe (B), rapid fix cycles while iterating (C), structured progress signals (D), broader cross-area edits (E), and bursty “session” work patterns (F).
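
For intuition, here is a hedged sketch of how one such deterministic signal could be turned into a 0–100 score. The saturation constant is made up for illustration, and the real axes combine several signals:

```ts
// Hypothetical Automation-style scoring: map median files-changed per commit
// onto 0-100 with a saturating curve (a median of ~8 files scores 50).
function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

function automationScore(filesChangedPerCommit: number[]): number {
  if (filesChangedPerCommit.length === 0) return 0;
  const m = median(filesChangedPerCommit);
  return Math.round((m / (m + 8)) * 100); // 8 is an assumed midpoint constant
}
```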

3) Persona selection

Each persona is defined by a small set of thresholds on the axes (e.g., “A ≥ 70” and “D < 45”). We select:

  • A strict match if a persona’s full rule set is satisfied.
  • Otherwise, a nearest-fit match if you satisfy enough of a persona’s conditions.

The “Matched signals” list in your profile shows the exact thresholds that were used.
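
To make the rule format concrete, here is a minimal sketch of threshold matching. The persona name, its conditions, and the nearest-fit cutoff are hypothetical:

```ts
// Hypothetical persona rules: a strict match satisfies every condition,
// a nearest-fit satisfies "enough" of them (the cutoff here is illustrative).
type Axis = "A" | "B" | "C" | "D" | "E" | "F";
interface Condition { axis: Axis; op: ">=" | "<"; value: number }
interface Persona { name: string; rules: Condition[] }

// Example rule set in the spirit of "A >= 70 and D < 45":
const examplePersona: Persona = {
  name: "Example Persona",
  rules: [
    { axis: "A", op: ">=", value: 70 },
    { axis: "D", op: "<", value: 45 },
  ],
};

function matched(axes: Record<Axis, number>, p: Persona): Condition[] {
  return p.rules.filter(({ axis, op, value }) =>
    op === ">=" ? axes[axis] >= value : axes[axis] < value
  );
}

function selectMatch(axes: Record<Axis, number>, p: Persona) {
  const hit = matched(axes, p);
  if (hit.length === p.rules.length) return { kind: "strict" as const, hit };
  if (hit.length >= Math.ceil(p.rules.length / 2)) return { kind: "nearest-fit" as const, hit };
  return null;
}
```

The `hit` list in this sketch is the kind of thing the "Matched signals" view would display.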

4) Score and confidence

  • Persona score is a 0–100 match score derived from the axes involved in the persona’s rule.
  • Confidence is a separate label based on coverage and data quality (more repos and more commits usually increase it).

Your profile is aggregated across repos using commit-weighted averaging, so repos with more commits influence your persona more.
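
In sketch form, commit-weighted averaging looks like this; the field names are assumptions, not the production schema:

```ts
// Commit-weighted average of per-repo axis scores: each repo contributes in
// proportion to its analyzed commit count.
type Axis = "A" | "B" | "C" | "D" | "E" | "F";
interface RepoProfile { commits: number; axes: Record<Axis, number> }

function aggregateAxes(repos: RepoProfile[]): Record<Axis, number> {
  const names: Axis[] = ["A", "B", "C", "D", "E", "F"];
  const total = repos.reduce((sum, r) => sum + r.commits, 0);
  const out = {} as Record<Axis, number>;
  for (const axis of names) {
    const weighted = repos.reduce((sum, r) => sum + r.axes[axis] * r.commits, 0);
    out[axis] = total ? Math.round(weighted / total) : 0;
  }
  return out;
}
```

So a repo with 900 analyzed commits moves the aggregate nine times as much as one with 100.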

5) Why it can be wrong

  • GitHub only shows what’s pushed; local work and private repos may be missing.
  • Some repos have incomplete metadata (e.g., missing file paths).
  • Some insights are based on a representative sample of commits, so rare patterns may be missed in very large histories.
  • We can’t see how you collaborate with an agent in your editor (prompts, iterations, copy/paste, refactors between commits), so we infer from what lands in Git.
  • Different projects can pull you into different modes; aggregation may “average you out”.