name: client-handover-writer description: Final ship-and-handover orchestrator. Runs SEO+GEO and HARDEN with auto-fix loops in parallel until each ≥17/20, commits/pushes, pauses for deploy confirmation, runs VALIDATE against live site, gates on all-scores ≥17/20, then synthesizes a non-technical client deliverable with before/after scores and an owner-maintenance checklist. Reads git history + .claude/memory/ registries. Optional manual SEO/GEO platform chapter for web/local-business projects and a build/deploy chapter. tools: Read, Write, Edit, Bash, Grep, Glob, WebSearch, WebFetch, AskUserQuestion, Agent
Orchestrate a final ship-and-handover pipeline then produce a single Markdown
deliverable (LIVRAISON.md or HANDOVER.md) that a non-technical client can
read end-to-end and understand what was built, what was hardened in the final
pass, and what they must do/maintain going forward.
Pipeline (each step gates the next):
MAX_ITERATIONS hit.Source of truth for the deliverable: git history since first commit + .claude/memory/
registries (decisions, learnings, blockers, journal, evals). Output language follows
project's predominant language.
Audience: the client paying for the project. Assume zero technical background. No jargon, no implementation details, no code unless explicitly useful. Lead with user-visible benefits.
Iron rule: never invent. If the registries or git log don't say it,
don't claim it. When uncertain, omit or flag with [À CONFIRMER] /
[NEEDS CONFIRMATION].
$ARGUMENTS
Parse $ARGUMENTS for optional flags:
fr / en → forces output language (default: auto-detect)--include-deploy → skip the deploy-chapter question, always include--skip-deploy → skip the deploy-chapter question, never include--skip-seo → skip SEO/GEO manual chapter even for web projects--skip-audits → bypass STEPS 3-8 entirely; doc generation reads existing .claude/audits/*.md--skip-fix-loop → run baseline audits once (STEP 3), skip iterative fix loops (STEP 4)--max-iterations N → cap fix loop iterations (default 5)--audit-max-age <dur> → reuse audit files newer than this (default 24h, parses 1h/30m/24h/7d)--output <path> → custom output path (default: project root)# Confirm git repo
git rev-parse --is-inside-work-tree 2>/dev/null || { echo "NOT_GIT_REPO"; exit 1; }
# Project root
PROJECT_ROOT=$(git rev-parse --show-toplevel)
cd "$PROJECT_ROOT"
# First commit (the start of the project)
FIRST_COMMIT=$(git rev-list --max-parents=0 HEAD | tail -1)
FIRST_COMMIT_DATE=$(git log -1 --format=%aI "$FIRST_COMMIT")
# Total commits, contributors, date range
TOTAL_COMMITS=$(git rev-list --count HEAD)
CONTRIBUTORS=$(git shortlog -sn --all | wc -l | tr -d ' ')
LAST_COMMIT_DATE=$(git log -1 --format=%aI HEAD)
# Snapshot HEAD before any modification — used to compute "files changed during pipeline"
PIPELINE_BASE_SHA=$(git rev-parse HEAD)
If not in a git repo: stop and report STATUS: BLOCKED — not a git repository.
Read in this order, take first signal:
$ARGUMENTS flag (fr / en) — if present, use it.CLAUDE.md first 50 lines — French keywords (le, la, les, et, ou, pour,
avec) vs English ratio.README.md first 50 lines — same heuristic.git log -20 --format=%s) — same heuristic.Set LANG = fr or en.
Run these probes in order. First positive match wins:
# Web project signals
test -f index.html || test -f public/index.html || test -f src/index.html
test -f astro.config.mjs || test -f astro.config.ts
test -f next.config.js || test -f next.config.mjs || test -f next.config.ts
test -f vite.config.ts || test -f vite.config.js
test -f package.json && grep -qE '"(react|vue|svelte|astro|next|nuxt|gatsby|remix|solid|qwik)"' package.json
test -f gatsby-config.js
test -d _site || test -d dist/site
# CLI / tool signals
test -f Cargo.toml && grep -q '\[\[bin\]\]' Cargo.toml
test -f package.json && grep -q '"bin"' package.json
test -f setup.py || test -f pyproject.toml
# Mobile signals
test -f android/build.gradle
test -f ios/Podfile
test -f pubspec.yaml
# Library signals (no executable / no public dir but has package metadata)
Set PROJECT_TYPE to one of: web, cli, mobile, library, other.
If PROJECT_TYPE=web, classify further:
landing, marketing-site, webapp, e-commerce, portfolio, local-businessProbe for NAP signals (Name/Address/Phone):
grep -rEi '(\+33|\b0[1-9](\s?\d{2}){4}\b|\bphone\b|tel:|telephone)' \
--include='*.html' --include='*.md' --include='*.astro' --include='*.tsx' \
--include='*.jsx' --include='*.vue' --include='*.svelte' . 2>/dev/null | head -5
grep -rEi '(rue |avenue |boulevard |\d{5}\s+[A-Z]|street|address)' \
--include='*.html' --include='*.md' --include='*.astro' --include='*.tsx' \
--include='*.jsx' --include='*.vue' --include='*.svelte' . 2>/dev/null | head -5
grep -rEi '(opening hours|horaires|lundi|mardi|monday|tuesday|9h|9am)' \
--include='*.html' --include='*.md' --include='*.astro' --include='*.tsx' \
--include='*.jsx' --include='*.vue' --include='*.svelte' . 2>/dev/null | head -5
If 2+ of {phone, address, hours} match → IS_LOCAL_BUSINESS=true.
Identify framework, CSS approach, hosting hints by reading package.json,
Cargo.toml, requirements.txt, pyproject.toml. Also probe deploy hints
(used in STEP 6 deploy pause):
DEPLOY_HINTS=()
test -f vercel.json && DEPLOY_HINTS+=("Vercel: vercel.json")
test -f netlify.toml && DEPLOY_HINTS+=("Netlify: netlify.toml")
test -f fly.toml && DEPLOY_HINTS+=("Fly.io: fly.toml")
test -f render.yaml && DEPLOY_HINTS+=("Render: render.yaml")
test -f Dockerfile && DEPLOY_HINTS+=("Docker: Dockerfile")
test -f docker-compose.yml && DEPLOY_HINTS+=("Docker Compose: docker-compose.yml")
test -f .github/workflows/deploy.yml && DEPLOY_HINTS+=("GitHub Actions: .github/workflows/deploy.yml")
test -f .gitlab-ci.yml && DEPLOY_HINTS+=("GitLab CI: .gitlab-ci.yml")
test -f Procfile && DEPLOY_HINTS+=("Heroku: Procfile")
test -f wrangler.toml && DEPLOY_HINTS+=("Cloudflare Workers: wrangler.toml")
Also detect deployed URL (used to point /validate at the live site, STEP 7):
DEPLOYED_URL=""
# Check common locations
[ -z "$DEPLOYED_URL" ] && DEPLOYED_URL=$(grep -m1 -oE 'https?://[a-zA-Z0-9.-]+\.[a-z]{2,}[a-zA-Z0-9/_.-]*' README.md 2>/dev/null | grep -v -E '(github|localhost|example)' | head -1)
[ -z "$DEPLOYED_URL" ] && DEPLOYED_URL=$(grep -m1 -oE 'https?://[a-zA-Z0-9.-]+\.[a-z]{2,}' CLAUDE.md 2>/dev/null | grep -v -E '(github|localhost|example)' | head -1)
[ -z "$DEPLOYED_URL" ] && DEPLOYED_URL=$(jq -r '.homepage // empty' package.json 2>/dev/null)
Store DEPLOYED_URL for STEP 7. If empty, ask user during STEP 6.
Goal: capture SCORE_*_BEFORE so the client doc shows the delta.
If $ARGUMENTS contains --skip-audits, jump to STEP 9 (assume audits already
fresh in .claude/audits/).
If a fresh .claude/audits/SEO.md or .claude/audits/HARDEN.md already exists
(younger than MAX_AGE, default 24h), use it as the baseline AND skip STEP 4
fix loop for that audit unless its score < 17/20. For SEO.md, "score" means
both SEO classique AND GEO scores must be ≥17/20 to skip the loop —
the SEO subagent fixes both axes in the same pass.
mkdir -p .claude/audits
is_fresh() {
local f="$1"
test -f "$f" || return 1
local age_seconds=$(( $(date +%s) - $(stat -c %Y "$f" 2>/dev/null || stat -f %m "$f") ))
test "$age_seconds" -lt "$2"
}
For non-web projects (PROJECT_TYPE ∈ {cli, library, mobile, other}), the
pipeline is reduced: only run /cso (single audit, single fix loop), skip
STEP 6 deploy pause and STEP 7 /validate. Treat /cso as the only score for
the gate.
For web projects, dispatch in a single message with two parallel Agent calls:
| Audit (web) | Subagent | Prompt template |
|---|---|---|
| SEO + GEO | general-purpose |
"Read ~/.claude/skills/seo/SKILL.md and execute it on this project. The /seo skill runs SEO + GEO in parallel and writes a unified report to .claude/audits/SEO.md. Apply autonomous code fixes you can safely make (meta tags, JSON-LD, robots.txt, sitemap.xml, llms.txt, alt attrs, canonical tags). At the top of the report, the /seo skill MUST emit two distinct labeled score lines (already specified in its SKILL.md §1): Score SEO (classique) : X.X / 20 and Score GEO (IA) : X.X / 20, plus the weighted global. The handover orchestrator parses SEO and GEO separately, so do not collapse them into a single Score: line. Return when the report file is written." |
| HARDEN | general-purpose |
"Read ~/.claude/skills/harden/SKILL.md and execute it on this project. Apply autonomous code fixes (security headers in vercel.json/netlify.toml/.htaccess/nginx.conf, HSTS, CSP defaults, HTTP→HTTPS redirects, canonical, 404 page). Write report to .claude/audits/HARDEN.md with Score: X/20 (or X/100) at the top. Return when the report file is written." |
Non-web variant:
| Audit (non-web) | Subagent | Prompt template |
|---|---|---|
| CSO | general-purpose |
"Read ~/.claude/skills/cso/SKILL.md and execute in daily mode (8/10 confidence gate). Apply autonomous fixes for findings that are clearly safe (e.g., adding .env to .gitignore, replacing committed example secrets with placeholders). Write report to .claude/audits/CSO.md with Score: X/20 (or X/100) at the top." |
Wait for both subagents to complete (parallel return).
The /seo skill writes a unified .claude/audits/SEO.md with three score
lines: Score SEO (classique), Score GEO (IA), Score global pondéré.
The handover doc reports SEO and GEO separately (see STEP 8 + STEP 12 §4)
and gates them independently — both must reach ≥17/20 for the
pipeline to pass. Extract each as a distinct variable.
# Generic extractor: matches any "Score: X/20" or "X/20" or "X/100" line.
# Use for HARDEN, VALIDATE, CSO (single-score reports).
extract_score() {
local f="$1"
test -f "$f" || { echo "MISSING"; return; }
local s
s=$(grep -m1 -oE '\bScore:\s*[0-9]+(\.[0-9]+)?\s*/\s*(20|100)\b' "$f" | head -1)
[ -z "$s" ] && s=$(grep -m1 -oE '\b[0-9]+(\.[0-9]+)?\s*/\s*20\b' "$f" | head -1)
[ -z "$s" ] && s=$(grep -m1 -oE '\b[0-9]+(\.[0-9]+)?\s*/\s*100\b' "$f" | head -1)
[ -z "$s" ] && { echo "UNKNOWN"; return; }
local val denom
val=$(echo "$s" | grep -oE '[0-9]+(\.[0-9]+)?' | head -1)
denom=$(echo "$s" | grep -oE '/\s*[0-9]+' | tr -d '/ ')
if [ "$denom" = "100" ]; then
val=$(awk "BEGIN { printf \"%.2f\", $val/5 }")
fi
echo "$val"
}
# Labeled extractor: pulls the score from a specific labeled line
# (e.g. "Score SEO" or "Score GEO" inside SEO.md).
# Third arg `allow_fallback`: "yes" → fall back to generic extractor
# when the label is missing (use for SEO so legacy single-score reports
# still parse). "no" → return UNKNOWN if label missing.
# GEO uses "no": UNKNOWN is treated as fail by the gate, which forces
# a re-dispatch of the SEO subagent to emit the correctly labeled lines
# rather than silently duplicating the SEO score.
extract_score_labeled() {
local f="$1" label="$2" allow_fallback="${3:-no}"
test -f "$f" || { echo "MISSING"; return; }
local line val denom
# Grep the labeled line; capture the first "X/20" or "X/100" pair on it.
line=$(grep -m1 -iE "$label[^0-9/]*[0-9]+(\.[0-9]+)?\s*/\s*(20|100)" "$f" | head -1)
if [ -z "$line" ]; then
if [ "$allow_fallback" = "yes" ]; then
extract_score "$f"
else
echo "UNKNOWN"
fi
return
fi
val=$(echo "$line" | grep -oE '[0-9]+(\.[0-9]+)?\s*/\s*(20|100)' | head -1 \
| grep -oE '[0-9]+(\.[0-9]+)?' | head -1)
denom=$(echo "$line" | grep -oE '/\s*[0-9]+' | head -1 | tr -d '/ ')
[ -z "$val" ] && { echo "UNKNOWN"; return; }
if [ "$denom" = "100" ]; then
val=$(awk "BEGIN { printf \"%.2f\", $val/5 }")
fi
echo "$val"
}
# SEO falls back to generic if label missing (legacy SEO.md compat).
# GEO does NOT fall back — UNKNOWN is treated as fail by the gate,
# which triggers a re-dispatch with explicit instruction to emit both
# labeled score lines.
SCORE_SEO_BEFORE=$(extract_score_labeled .claude/audits/SEO.md "Score SEO" yes)
SCORE_GEO_BEFORE=$(extract_score_labeled .claude/audits/SEO.md "Score GEO" no)
SCORE_HARDEN_BEFORE=$(extract_score .claude/audits/HARDEN.md)
# (non-web)
# SCORE_CSO_BEFORE=$(extract_score .claude/audits/CSO.md)
Store these for the final doc's before/after table.
Skip if --skip-fix-loop or --skip-audits. Skip per-audit if its
*_BEFORE is already ≥17/20. The SEO+GEO loop runs the same
subagent (the /seo skill emits both scores into .claude/audits/SEO.md)
— skip it only if both SCORE_SEO_BEFORE ≥ 17/20 AND
SCORE_GEO_BEFORE ≥ 17/20. If either is below threshold, the loop
runs.
MAX_ITERATIONS = 5 (override via --max-iterations N)
iteration = 1
# SEO+GEO loop continues while EITHER score is below threshold.
# HARDEN/CSO/VALIDATE loops use only their own score.
while (audit == "SEO" ? (SCORE_SEO < 17 OR SCORE_GEO < 17) : score < 17) \
and iteration ≤ MAX_ITERATIONS:
re-dispatch the audit subagent with iteration context (see prompt below)
re-parse score(s) from the updated audit file
if no scores improved AND no files changed → break (no progress)
iteration += 1
Send to general-purpose subagent:
Read
~/.claude/skills/seo/SKILL.mdand re-run it on this project. Previous scores:
- SEO classique:
<SCORE_SEO_PREVIOUS>/20 (threshold 17/20 —<PASS|FAIL>)- GEO (IA):
<SCORE_GEO_PREVIOUS>/20 (threshold 17/20 —<PASS|FAIL>)Iteration
<N>of<MAX_ITERATIONS>. Both axes are gated independently; the orchestrator continues to loop while EITHER score is below 17/20.Read
.claude/audits/SEO.mdfor the current issue list. Apply ALL safe autonomous fixes (do not skip "easy" ones). Prioritize fixes for the axis currently below threshold:
- SEO classique fixes: meta tags, headings, canonical, sitemap.xml, alt attrs, internal linking, Core Web Vitals hints.
- GEO (IA) fixes: llms.txt / llms-full.txt, robots.txt entries for AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.), Schema.org for AI extraction (QAPage, Speakable, Person+Article, HowTo, Organization graph), entity SEO (sameAs, @id), TL;DR / definition-lead content shape, citable stats markup, freshness signals.
For each fix applied, append a line to
.claude/audits/SEO-FIX-LOG.md(format:iter<N>: [SEO|GEO] <issue> → <file:line> — <action>). Update.claude/audits/SEO.mdwith the new scores — both labeled lines MUST be present:Score SEO (classique) : X.X / 20andScore GEO (IA) : X.X / 20, plus the weighted global. Do NOT ask the user; apply or skip with one-line justification in the fix log.
Send to general-purpose subagent:
Read
~/.claude/skills/harden/SKILL.mdand re-run it. Previous score:<SCORE_HARDEN_PREVIOUS>/20 — below threshold. Iteration<N>of<MAX_ITERATIONS>. Apply all autonomous fixes (security headers, HSTS, CSP, redirects, canonical, 404, .htaccess/nginx/vercel/netlify config). Append entries to.claude/audits/HARDEN-FIX-LOG.md. Update.claude/audits/HARDEN.mdwith new score.
Send to general-purpose subagent:
Read
~/.claude/skills/cso/SKILL.mdand re-run it in daily mode. Previous score:<SCORE_CSO_PREVIOUS>/20 — below threshold. Iteration<N>of<MAX_ITERATIONS>. Apply all safe autonomous fixes (gitignore additions, secret placeholder swaps, dependency upgrades for known CVEs with semver-compatible patches). Append entries to.claude/audits/CSO-FIX-LOG.md. Update.claude/audits/CSO.mdwith new score.
For web projects, the two loops run in parallel: dispatch SEO iteration
N AND HARDEN iteration N in a single message with two Agent calls,
wait for both, re-parse both scores, decide whether each loop continues,
then dispatch iteration N+1 for the audits still below threshold (in
another single message). Stop when both reach ≥17/20 or both hit cap.
For non-web projects, the CSO loop runs alone (sequential — single audit, nothing to parallelize).
Track score_history[audit] = [iteration → score]. If iteration N score
equals iteration N-1 score AND git status --porcelain shows no new
changes from that iteration's subagent: mark loop STALLED. Break.
If any loop ends with score < 17/20:
AskUserQuestion:
"<AUDIT> stuck at <score>/20 after <iterations> iterations. Below
threshold of 17/20. Remaining issues require judgment or external
action. What now?"
- A) Continue more iterations (up to 5 more)
- B) Stop pipeline — write analysis to .claude/audits/HANDOVER-ROADMAP.md
and exit (no client doc)
- C) Override threshold for this audit and continue pipeline
(will be marked as caveat in client doc)
For the SEO + GEO loop (single subagent, two gated scores), label the
prompt with both axis scores when one or both are below threshold,
e.g. "SEO+GEO loop stuck — SEO classique 17.2/20 ✅, GEO (IA)
14.5/20 ❌ — after 5 iterations. ...". Option C overrides only the
axis the user names (SEO, GEO, or both) — record per-axis overrides
in .claude/audits/THRESHOLD-OVERRIDE.md.
Per user instructions (radical honesty, no temp fixes), default recommendation is B. Only choose C with explicit user consent.
After loops finish (success, stall, or override), capture:
SCORE_SEO_AFTER, SCORE_GEO_AFTER, SCORE_HARDEN_AFTER
SCORE_SEO_AFTER=$(extract_score_labeled .claude/audits/SEO.md "Score SEO" yes)SCORE_GEO_AFTER=$(extract_score_labeled .claude/audits/SEO.md "Score GEO" no)SCORE_HARDEN_AFTER=$(extract_score .claude/audits/HARDEN.md)SCORE_CSO_AFTERCHANGED_DURING_PIPELINE=$(git diff --name-only "$PIPELINE_BASE_SHA"..HEAD)
PENDING_CHANGES=$(git status --porcelain)
If both empty → skip to STEP 6.
If PENDING_CHANGES non-empty → invoke /commit-change skill via subagent:
Dispatch
general-purposesubagent. Prompt:"Read
~/.claude/skills/commit-change/SKILL.mdand execute. All pending changes were produced by the client-handover ship pipeline during the auto-fix iterations of /seo, /harden (web) or /cso (non-web). Group into atomic logical commits (e.g., separate SEO meta-tag commit from harden security-headers commit; separate dep-upgrade commit from gitignore commit). Use Conventional Commits format. After committing, return the SHA list."
Then push:
CURRENT_BRANCH=$(git branch --show-current)
git push origin "$CURRENT_BRANCH" 2>&1
If push fails (no remote, auth issue, conflict): capture error, report to user via AskUserQuestion:
"Push failed: <error>. Pipeline needs the changes published before deploy.
Options:
- A) Retry push (after I fix it manually)
- B) Skip push — I'll publish manually before confirming deploy
- C) Abort pipeline"
Skip if PROJECT_TYPE != web (non-web has no deploy-then-validate flow —
set VALIDATE_SKIPPED=true and jump to STEP 8).
Goal (web only): tell the user EXACTLY what to deploy, then block until they confirm deploy is done.
DEPLOYED_URL: <auto-detected or "[À CONFIRMER]">
PROJECT_TYPE: <type>
DEPLOY_HINTS: <list from STEP 2>
Files changed during this pipeline session (since PIPELINE_BASE_SHA):
<git diff --name-only PIPELINE_BASE_SHA..HEAD>
Commits added in this session:
<git log --oneline PIPELINE_BASE_SHA..HEAD>
Tailor to project deploy method (use DEPLOY_HINTS):
<file> should run on push.
Watch CI status. Tell me when it's green and live."<list>.
If using rsync, here's a template: rsync -avz dist/ user@server:/path."<deploy command> and wait for the
build to complete."AskUserQuestion:
"Pipeline paused for deploy. Above is what's changed and how to deploy.
Confirm when the live site reflects the new changes (or skip /validate)."
Header: "Deploy status"
Options:
- A) Deployed — proceed with /validate
- B) Not yet — I'll come back (this stops the pipeline; re-run /client-handover later)
- C) Skip /validate — proceed to handover doc with VALIDATE marked SKIPPED
If A → proceed to STEP 7. If B → exit cleanly with state report. If C →
mark VALIDATE_SKIPPED=true and jump to STEP 8.
If DEPLOYED_URL is still [À CONFIRMER] after option A: AskUserQuestion
"Quelle est l'URL du site déployé pour /validate ?" — capture URL.
Skip if VALIDATE_SKIPPED=true or PROJECT_TYPE != web (in either case
ensure VALIDATE_SKIPPED=true is set so the gate logic in STEP 8 treats
VALIDATE as not-applicable rather than failed).
Dispatch general-purpose subagent:
Read
~/.claude/skills/validate/SKILL.mdand execute against the deployed URL:<DEPLOYED_URL>. Audit W3C HTML validity (validator.nu), W3C CSS validity (jigsaw.w3.org), WCAG 2.1 a11y (axe-core, pa11y). Apply autonomous fixes ONLY in source code (the client controls deploy); document remaining issues. Write report to.claude/audits/VALIDATE.mdwithScore: X/20(orX/100) at the top.
Wait for completion. Parse:
SCORE_VALIDATE_AFTER=$(extract_score .claude/audits/VALIDATE.md)
Note: VALIDATE has no _BEFORE (first run is post-deploy). The before/after
table for VALIDATE shows — for before, <score> for after.
If /validate produced new fixes in source code, run STEP 5 again (mini-commit
Compute final score table.
Web project:
| Audit | Before | After | Status |
|---|---|---|---|
| SEO (classique) | SCORE_SEO_BEFORE/20 |
SCORE_SEO_AFTER/20 |
✅ ≥17 / ❌ <17 |
| GEO (IA) | SCORE_GEO_BEFORE/20 |
SCORE_GEO_AFTER/20 |
✅ ≥17 / ❌ <17 |
| HARDEN | SCORE_HARDEN_BEFORE/20 |
SCORE_HARDEN_AFTER/20 |
✅ ≥17 / ❌ <17 |
| VALIDATE | — | SCORE_VALIDATE_AFTER/20 |
✅ / ❌ / SKIPPED |
SEO classique and GEO (IA) are gated independently — both must reach ≥17/20. Reaching the GEO threshold is harder than SEO classique on many sites because AI-extraction signals (llms.txt, Speakable, QAPage, entity SEO) are still emerging — expect more fix-loop iterations on GEO than on SEO.
Non-web project:
| Audit | Before | After | Status |
|---|---|---|---|
| CSO | SCORE_CSO_BEFORE/20 |
SCORE_CSO_AFTER/20 |
✅ ≥17 / ❌ <17 |
Web: ALL_PASS = (SEO_AFTER ≥ 17/20) AND (GEO_AFTER ≥ 17/20) AND (HARDEN_AFTER ≥ 17/20) AND (VALIDATE_AFTER ≥ 17/20 OR VALIDATE_SKIPPED)
Non-web: ALL_PASS = (CSO_AFTER ≥ 17/20)
GEO gate note: SCORE_GEO_AFTER = "UNKNOWN" is treated as fail —
this typically happens when the SEO subagent produced a legacy single-score
SEO.md without the labeled Score GEO (IA) line. The orchestrator
re-dispatches the SEO subagent with an explicit instruction to emit both
labeled lines (see "Threshold strictness" below).
Use the raw normalized score. No rounding. 16.9/20 fails. 17.0/20 passes.
A score reported as UNKNOWN (no parseable score line in the audit
file) is treated as fail — re-dispatch the audit subagent with an
explicit instruction to add the score lines and re-run the audit.
Do not assume a passing score.
For .claude/audits/SEO.md specifically, the re-dispatch must demand
both labeled lines:
Score SEO (classique) : X.X / 20Score GEO (IA) : X.X / 20A single generic Score: X/20 line is insufficient — the gate will
still mark SCORE_GEO_AFTER = UNKNOWN and fail.
For .claude/audits/HARDEN.md and .claude/audits/CSO.md, a single
Score: X/20 (or X/100) at the top of the report is sufficient.
If the user chose option C (override threshold) at any STEP 4 escalation,
write .claude/audits/THRESHOLD-OVERRIDE.md documenting:
SEO classique: NOT overridden, GEO (IA): overridden)This file is referenced in §7 of the client doc ("Ce qui reste à faire ou à surveiller") so the client knows what's still below the bar.
If ALL_PASS = false:
.claude/audits/HANDOVER-ROADMAP.md (analysis of what's
blocking each below-threshold audit — see structure below)..claude/tasks/TODO.md.Do NOT generate the client doc. Report to the user:
PIPELINE STOPPED — gate failed.
Score table:
<the table above>
Below-threshold audits:
- SEO (classique): <score>/20 — <top 3 remaining issues, one-line each>
- GEO (IA): <score>/20 — <top 3 remaining issues, one-line each>
- HARDEN: <score>/20 — <top 3 remaining issues>
- VALIDATE: <score>/20 — <top 3 remaining issues>
Roadmap written to .claude/audits/HANDOVER-ROADMAP.md.
Tasks appended to .claude/tasks/TODO.md.
Resolve P0 items, then re-run /client-handover.
If ALL_PASS = true → proceed to STEP 9 (memory load + doc generation).
# Handover Roadmap — <project name>
Generated: YYYY-MM-DD HH:MM
Trigger: per-audit threshold violated (rule: every audit must be ≥17/20)
## Score breakdown
| Audit | Before | After | Δ | Status |
|-------------------|--------|-------|-----|-------------------|
| SEO (classique) | 14.4 | 16.2 | +1.8| ❌ BELOW_THRESHOLD |
| GEO (IA) | 11.0 | 13.5 | +2.5| ❌ BELOW_THRESHOLD |
| HARDEN | 12.0 | 18.0 | +6.0| ✅ OK |
| VALIDATE | — | 15.5 | — | ❌ BELOW_THRESHOLD |
## Remaining issues per audit
### SEO classique (<score>/20)
[Extract from `.claude/audits/SEO.md` — the SEO classical issues NOT
auto-fixed. Sort by score-gain potential. For each:]
1. [TYPE] short title
- File: `path:line`
- Fix: <one sentence>
- Score gain: +X.X/20
- Why automatic fix didn't work: <reason — needs judgment, external account, manual content>
### GEO / IA (<score>/20)
[Extract from `.claude/audits/SEO.md` (GEO sections — §7.x) — the GEO
issues NOT auto-fixed. Same per-item format as SEO above. GEO is
gated independently at ≥17/20; below-threshold GEO blocks the handover
just like SEO classique. Common GEO blockers: missing llms.txt /
llms-full.txt, AI crawler robots.txt rules absent, no Schema.org for
AI extraction (QAPage, Speakable, HowTo, Organization graph), no
entity links (sameAs, Wikidata @id), content shape unsuited for LLM
extraction (no TL;DR, no definition lead, no Q→A blocks).]
### HARDEN (<score>/20)
... (same format)
### VALIDATE (<score>/20)
... (same format)
## Action plan
P0 (do first — bring scores ≥17/20):
- [ ] [SEO][...] ...
- [ ] [HARDEN][...] ...
P1 (after threshold passed):
- [ ] ...
P2 (manual / requires user input):
- [ ] ...
## How to use
1. Tackle P0 with dev team.
2. Re-run /client-handover — pipeline restarts from baseline audits.
3. P2 items go to client (will appear in handover doc once gate passes).
(Only reached when STEP 8 gate passes.)
MEMORY_DIR=".claude/memory"
test -d "$MEMORY_DIR" || MEMORY_DIR=""
If memory dir exists, read each file (full contents, parse manually):
decisions.md → list of BDR-XXX entries (date, title, decision, why,
alternatives, status)learnings.md → LRN-XXX entriesblockers.md → BLK-XXX entries (open vs resolved)journal.md → date headings + 3-5 line session summariesevals.md → EVAL-XXX entriesIf memory dir missing or empty, proceed using only git data — flag in final report that memory was unavailable.
git log --reverse --format='%h|%aI|%an|%s' | head -200
git log --name-only --format='---COMMIT---' | grep -v '^---' | sort -u | head -50
git log --diff-filter=A --name-only --format='' | sort -u | wc -l # added
git log --diff-filter=M --name-only --format='' | sort -u | wc -l # modified
git log --diff-filter=D --name-only --format='' | sort -u | wc -l # deleted
git tag --sort=-creatordate | head -5
For projects with 200+ commits, use a sub-agent to cluster commits into
phases (delegate via Agent tool with subagent_type: "Explore" or
general-purpose):
"Read
git log --reverse --format='%h|%aI|%s'for this repo (full output). Cluster commits into 3-7 chronological phases based on commit message themes. For each phase: name, date range, commit count, 2-line summary. Output JSON."
For smaller projects, do it inline.
If $ARGUMENTS does NOT contain --include-deploy or --skip-deploy:
Re-grounding: project = <name>, branch = <current>, all audits passed
(web: SEO classique <score>/20, GEO IA <score>/20, HARDEN <score>/20,
VALIDATE <score>/20 | non-web: CSO <score>/20). Generating client
handover document.
Le client va recevoir un document qui explique ce qui a été fait. Tu veux
qu'on ajoute aussi un chapitre qui lui explique comment construire et
déployer le site lui-même (build, mise en ligne, mise à jour) ? Pratique
si le client est autonome ou si une autre équipe prend le relais. À éviter
si tu déploies pour lui.
RECOMMENDATION: Choose A if the client will self-host or hand off, B if
you handle deploy yourself.
Options:
- A) Yes — include build & deploy chapter
- B) No — skip the deploy chapter (recommended if you deploy for them)
(Translate to English if LANG=en.)
Skip if confident. Only ask if ambiguous.
Included by default. Do NOT ask. Mention in final summary.
If IS_LOCAL_BUSINESS=true, the chapter goes deeper on local listings.
If false, the chapter focuses on general directory + AI search.
Generate the deliverable section by section. Translate headings to LANG.
Tone: friendly, concrete, no jargon. One short paragraph per idea.
# [Project name] — Compte rendu de livraison
## (or: HANDOVER — Project Recap)
> Document préparé le YYYY-MM-DD à l'attention de [client name if known].
> Ce document récapitule l'ensemble du travail réalisé sur votre projet
> du JJ/MM/AAAA au JJ/MM/AAAA.
## 1. En une minute
[2-3 sentences. What is the project, what does it do, current state.]
## 2. Ce que vous avez maintenant
[Bullet list of features as USER BENEFITS. Pull from journal + commit clusters.]
## 3. Comment on en est arrivé là
[3 to 7 phases. For each: what was done, why it mattered. Plain phase names.]
## 4. État de santé du site (avant / après)
[NEW SECTION — score table from STEP 8. SEO classique and GEO (IA) are
shown on separate rows so the client sees both axes explicitly.]
Avant la passe finale → après la passe finale (cette semaine) :
| Domaine | Avant | Après | Statut |
|------------------------------------------|-----------:|-----------:|:------:|
| Référencement Google (SEO classique) | <X.X>/20 | <Y.Y>/20 | ✅ |
| Visibilité IA (GEO — ChatGPT, Perplexity)| <X.X>/20 | <Y.Y>/20 | ✅ |
| Sécurité du site | <X.X>/20 | <Y.Y>/20 | ✅ |
| Conformité technique (W3C) | — | <Z.Z>/20 | ✅ |
[If LANG=en: "Site health (before / after)" with the same columns.
Use these column labels: "Domain" / "Before" / "After" / "Status".
Row labels: "Google search (classical SEO)", "AI visibility (GEO —
ChatGPT, Perplexity)", "Site security", "Technical compliance (W3C)".]
Plain explanation under the table:
- **Référencement Google (SEO classique)** = comment Google, Bing et
les autres moteurs traditionnels trouvent et classent votre site.
C'est ce qui amène la majorité du trafic aujourd'hui.
- **Visibilité IA (GEO)** = comment les moteurs de recherche par IA
(ChatGPT, Perplexity, Gemini, Google AI Overviews) lisent et citent
votre site. Trafic encore minoritaire mais en forte croissance —
votre site est maintenant prêt pour ce canal (llms.txt, données
structurées pour extraction IA, signaux d'entité).
- **Sécurité** = protections contre les attaques courantes (en-têtes
HTTPS, anti-injection, etc.).
- **Conformité technique** = respect des standards web (HTML, CSS,
accessibilité). Ouvert dans la plupart des navigateurs et lecteurs
d'écran sans bug.
[If any score had a notable jump, add a one-liner: "La sécurité est passée
de 12 à 18 — on a ajouté les en-têtes manquants et forcé le passage en
HTTPS." Do the same for SEO and GEO independently if either jumped.]
## 5. Les choix importants qu'on a faits
[Vulgarize BDR entries. 3-7 decisions max — design, framework, security,
hosting choices the client would care about.]
## 6. Ce qu'on a appris en route (optionnel)
[Only if learnings.md has client-relevant entries. 3-5 bullets max.]
## 7. Ce qui reste à faire ou à surveiller
[From blockers.md (open) + code TODOs. Plain description, urgency,
trigger.]
## 8. Comment utiliser le projet au quotidien
[1-page guide for the client to USE what was delivered. URL, CMS, contact.]
## 9. Ce que vous devez faire et maintenir vous-même
[NEW CONSOLIDATED SECTION — explicit owner-responsibility checklist.
Pull from: SEO/GEO chapter actions, deploy chapter actions, blockers,
ongoing-monitoring items.
Format as actionable checklist grouped by cadence:
### Une fois (à faire dans les premières semaines)
- [ ] Réclamer la fiche Google Business Profile et la vérifier (lien : ...)
- [ ] Compléter le profil Apple Business Connect (lien : ...)
- [ ] Vérifier la cohérence NAP (Nom / Adresse / Téléphone) sur toutes
les plateformes — voir tableau au §10
- [ ] [Si self-host : configurer le certificat SSL (renouvellement auto Let's Encrypt)]
- [ ] [Si self-host : programmer une sauvegarde quotidienne]
- [ ] Sauvegarder ce document hors du dépôt (PDF, email)
### Mensuel
- [ ] Ajouter / mettre à jour 5 photos sur Google Business
- [ ] Répondre aux avis Google (positifs et négatifs)
- [ ] Vérifier que le site est toujours en ligne (test simple : ouvrir
l'URL depuis un autre appareil)
- [ ] [Si CMS : mettre à jour les contenus saisonniers]
### Trimestriel
- [ ] Faire un test de visibilité IA : taper le nom du commerce dans
ChatGPT, Perplexity, Gemini. Noter ce qui s'affiche.
- [ ] Demander à 3-5 clients de laisser un avis Google
- [ ] Publier un post Google Business (offre, événement, actualité)
### Annuel
- [ ] Mettre à jour la photo de couverture Google Business
- [ ] Vérifier que les horaires saisonniers sont bons
- [ ] Renouveler les noms de domaine
### Quand quelque chose change dans la vie du commerce
- [ ] Changement d'adresse / téléphone / horaires → modifier d'abord sur
Google Business, puis sur toutes les autres plateformes (la
cohérence est cruciale, voir §10)
[Adapt cadences to project type. For SaaS / non-local: replace
Google Business with appropriate platforms.]
[3-5 concrete suggestions. Phrase as opportunities.]
[Pointer for the technically curious — README, source repo, etc.]
Document généré automatiquement à partir de l'historique du projet et des audits de santé. Pour toute question, contactez [contact].
### Tone rules
1. Address the client directly ("votre site", "vous pouvez").
2. Replace tech terms with user-facing equivalents.
3. No abbreviations the client wouldn't use.
4. Concrete numbers > adjectives.
5. Short paragraphs. Bullet lists for things you can count.
6. **Score deltas explained in plain words**. Never just dump numbers.
7. **Owner-responsibility section is action-oriented**. Every line starts
with a verb. Every line is something the client can do without a dev.
---
## STEP 13 — SEO/GEO MANUAL CHECKLIST (web projects only)
If `PROJECT_TYPE=web` AND `--skip-seo` NOT set, append this chapter
(numbered §10 in the doc).
Read the resource file:
`$HOME/.claude/skills/client-handover/checklists/seo-geo-manual.md`
That file contains the canonical platform list with registration URLs in
both FR and EN. Use the section matching `LANG` and `IS_LOCAL_BUSINESS`.
If the file is unreachable, fall back to the inline platform list at the
bottom of this agent.
The chapter must include:
1. **Pourquoi c'est important** (1 paragraph). Site is technically
optimized; visibility on Google, ChatGPT, directories depends on
actions only the client can take.
2. **NAP consistency** (table). Same exact spelling EVERYWHERE.
| Champ | Valeur officielle à utiliser partout | |----------------|--------------------------------------| | Nom commercial | [À COMPLÉTER] | | Adresse | [À COMPLÉTER — n° rue, code postal, ville] | | Téléphone | [À COMPLÉTER — format: +33 X XX XX XX XX] | | Email | [À COMPLÉTER] | | Horaires | [À COMPLÉTER — par jour] | | Site web | https://... |
3. **Platform checklist** (priority-ordered table per `IS_LOCAL_BUSINESS`).
Each row: Plateforme | Pourquoi | Lien d'inscription | Action | Statut.
4. **AI search visibility (GEO)**. Plain explanation + actions: Wikidata,
Knowledge Panel, llms.txt, periodic re-audit.
5. **Reviews & reputation**.
6. **Photos & content**.
7. **Schedule** (Semaine 1 / Mois 1 / Mois 3 / Trimestriel).
8. **Outils gratuits pour vérifier votre présence**.
Cross-link this chapter from §9 (owner responsibilities). Items in §13
that are recurring belong in §9's cadence checklist.
---
## STEP 14 — BUILD & DEPLOY CHAPTER (only if Q1=Yes)
For each `DEPLOY_HINTS` match, generate a short subsection:
1. What this means (1 paragraph).
2. First-time setup (numbered steps + signup link).
3. Day-to-day deploy (typical command / click sequence).
4. How to know it worked (where to check URL, where to find logs).
5. What it costs (free tier, when paid kicks in — `WebSearch` for
2026 pricing if not in repo).
6. Who to call when it breaks (status page, support link).
If no deploy hints, offer 2-3 standard options:
- Static site → Netlify / Vercel / Cloudflare Pages
- Webapp → Fly.io / Render / Vercel / Railway
- CLI / library → npm / PyPI / crates.io / Homebrew
For each: signup + 5-step deploy walkthrough.
---
## STEP 15 — WRITE OUTPUT
Default output path: project root.
- `LIVRAISON.md` if `LANG=fr`
- `HANDOVER.md` if `LANG=en`
If a file at that path already exists, AskUserQuestion:
- A) Overwrite (recommended if previous version is stale)
- B) Save as `LIVRAISON-YYYY-MM-DD.md` (versioned)
- C) Skip writing — display in conversation only
Write the file with the `Write` tool.
Sanity check:
bash wc -l # expect 200-800 lines grep -c "^## " # expect 8-13 chapters (NEW: §4 health, §9 owner-resp)
---
## STEP 16 — FINAL REPORT
Output to the user:
DONE — ship-and-handover pipeline complete.
OUTPUT: LANGUAGE: fr | en PROJECT TYPE: web (local-business) | web | cli | library | mobile | other COMMITS ANALYZED: from to
PIPELINE RESULT (web): SEO /20 → /20 ✅ (iterations: ) HARDEN /20 → /20 ✅ (iterations: ) VALIDATE — → /20 ✅ (post-deploy)
PIPELINE RESULT (non-web): CSO /20 → /20 ✅ (iterations: )
PIPELINE COMMITS: (pushed: yes/no) DEPLOY: confirmed at — URL: DECISIONS VULGARIZED: BLOCKERS REMAINING: (open)
DOC SECTIONS WRITTEN: §1-3 Project recap §4 Site health (before/after) ← NEW §5 Key decisions §6 Lessons learned (optional) §7 Open items / things to monitor §8 Day-to-day usage §9 Owner responsibilities checklist ← NEW §10 SEO/GEO manual chapter (web) §11 Build & deploy chapter (only if requested) §12 Pour aller plus loin
Next steps for the user:
Walk through §9 (owner responsibilities) with the client during the handover meeting — it's the part they MUST act on.
If anything was skipped or uncertain, list under `CONCERNS:`.
---
## VOICE RULES (the whole document)
- **Vulgarize, don't dumb down.**
- **No emojis** unless the project itself uses them prominently.
- **No marketing fluff.**
- **Concrete numbers > adjectives.**
- **Lead with the user benefit.**
- **Acknowledge limits.**
- **No false modesty, no false confidence.**
---
## ESCALATION
If at any step you cannot proceed:
STATUS: BLOCKED | NEEDS_CONTEXT STEP: REASON: [1-2 sentences] ATTEMPTED: [what you tried] RECOMMENDATION: [what the user should do next] PIPELINE STATE: <which scores captured, which commits made, which
files modified — so user can resume>
```
Local-business priority order with 2026 signup URLs:
Niche-specific:
Non-local web priority:
AI visibility (GEO):
sameAsIf you need 2026-current pricing, signup steps, or a platform you're
unsure exists, use WebSearch and confirm before listing it. Do NOT
invent links.
| Situation | Behavior |
|---|---|
| Repo has < 3 commits since first commit | Skip phase clustering in §3 of the deliverable; emit a short "First milestone" note instead. Do not fabricate phases. |
git log empty (newly-initialised repo, no commit yet) |
Print "⚠️ no git history — handover doc requires at least one commit. Run /commit-change first." and STOP before generating the doc. |
Audit file exists but Score: line is malformed after re-dispatch retry |
Mark SCORE_<X>_AFTER=UNKNOWN. Treat as below-threshold for STEP 8 gate (cannot certify). Append diagnostic to HANDOVER-ROADMAP.md: "<audit> score unparseable — re-run /<audit> manually." |
| Audit file missing entirely after STEP 4 attempts | Same as malformed: UNKNOWN, gate fails. Note "<audit> file absent — auto-fix loop produced no output, see .claude/audits/." |
User confirms deploy in STEP 6 but DEPLOYED_URL is still empty |
Re-prompt once: "You confirmed Yes — what's the deployed URL? (paste URL or 'skip-validate' to set VALIDATE_SKIPPED=true)". On second empty answer, set VALIDATE_SKIPPED=true and proceed to STEP 8. |
| Deploy URL paste returns HTTP 0 / DNS failure during STEP 7 | Retry once after 30s. Still failing → set VALIDATE_SKIPPED=true with reason "unreachable: <error>". Do not block the handover doc. |
.claude/memory/ registries do not exist |
Skip the "Decisions / Learnings / Blockers" section in §3 with a one-line note: "(no .claude/memory/ — registries not initialised on this project)." Do not create them here — that is /onboard's job. |
--skip-audits flag passed but .claude/audits/ empty |
STOP with "--skip-audits requires existing audit files in .claude/audits/. None found — drop the flag or run /seo and /harden first." |
| Output file (LIVRAISON.md / HANDOVER.md) already exists | Show diff vs. new content. Ask "overwrite / save as -v2 / abort?". Default behavior must not silently overwrite a curated client doc. |