
add gstack skill symlinks, graphify skill, and gitignore updates

Track all gstack-provided skill symlinks (autoplan, browse, qa, etc.)
and the graphify skill. Add .claude/, graphify-out/, .ctx7-cache/ to
gitignore to exclude local/generated files from the repo.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
bastien 1 month ago
Parent
Commit
fb3e397c67
42 changed files with 1333 additions and 23 deletions
  1. +7 -0  .gitignore
  2. +12 -0  CLAUDE.md
  3. +1 -0  skills/autoplan/SKILL.md
  4. +1 -0  skills/benchmark/SKILL.md
  5. +1 -0  skills/browse/SKILL.md
  6. +1 -0  skills/canary/SKILL.md
  7. +1 -0  skills/careful/SKILL.md
  8. +1 -0  skills/checkpoint/SKILL.md
  9. +1 -0  skills/codex/SKILL.md
  10. +1 -0  skills/connect-chrome/SKILL.md
  11. +1 -0  skills/cso/SKILL.md
  12. +1 -0  skills/design-consultation/SKILL.md
  13. +1 -0  skills/design-html/SKILL.md
  14. +1 -0  skills/design-review/SKILL.md
  15. +1 -0  skills/design-shotgun/SKILL.md
  16. +1 -0  skills/devex-review/SKILL.md
  17. +1 -0  skills/document-release/SKILL.md
  18. +1 -0  skills/freeze/SKILL.md
  19. +1 -0  skills/graphify/.graphify_version
  20. +1276 -0  skills/graphify/SKILL.md
  21. +1 -0  skills/gstack-upgrade/SKILL.md
  22. +1 -0  skills/guard/SKILL.md
  23. +0 -23  skills/health/SKILL.md
  24. +1 -0  skills/health/SKILL.md
  25. +1 -0  skills/investigate/SKILL.md
  26. +1 -0  skills/land-and-deploy/SKILL.md
  27. +1 -0  skills/learn/SKILL.md
  28. +1 -0  skills/office-hours/SKILL.md
  29. +1 -0  skills/open-gstack-browser/SKILL.md
  30. +1 -0  skills/pair-agent/SKILL.md
  31. +1 -0  skills/plan-ceo-review/SKILL.md
  32. +1 -0  skills/plan-design-review/SKILL.md
  33. +1 -0  skills/plan-devex-review/SKILL.md
  34. +1 -0  skills/plan-eng-review/SKILL.md
  35. +1 -0  skills/qa-only/SKILL.md
  36. +1 -0  skills/qa/SKILL.md
  37. +1 -0  skills/retro/SKILL.md
  38. +1 -0  skills/review/SKILL.md
  39. +1 -0  skills/setup-browser-cookies/SKILL.md
  40. +1 -0  skills/setup-deploy/SKILL.md
  41. +1 -0  skills/ship/SKILL.md
  42. +1 -0  skills/unfreeze/SKILL.md

+ 7 - 0
.gitignore

@@ -3,6 +3,13 @@
 # Symlink created by link.sh (GStack submodule target)
 skills/gstack
 
+# Local project config (per-machine, not shared)
+.claude/
+
+# Generated outputs
+graphify-out/
+.ctx7-cache/
+
 # Install logs
 install-*.log
 

+ 12 - 0
CLAUDE.md

@@ -56,3 +56,15 @@ Apply unless repo-specific instructions override.
 
 
 - Stop if requirements are unclear. Ask, don't guess.
 - No invented context. List unknowns before continuing.
+# graphify
+- **graphify** (`~/.claude/skills/graphify/SKILL.md`) - any input to knowledge graph. Trigger: `/graphify`
+When the user types `/graphify`, invoke the Skill tool with `skill: "graphify"` before doing anything else.
+
+## graphify
+
+This project has a graphify knowledge graph at graphify-out/.
+
+Rules:
+- Before answering architecture or codebase questions, read graphify-out/GRAPH_REPORT.md for god nodes and community structure
+- If graphify-out/wiki/index.md exists, navigate it instead of reading raw files
+- After modifying code files in this session, run `python3 -c "from graphify.watch import _rebuild_code; from pathlib import Path; _rebuild_code(Path('.'))"` to keep the graph current

+ 1 - 0
skills/autoplan/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/autoplan/SKILL.md

+ 1 - 0
skills/benchmark/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/benchmark/SKILL.md

+ 1 - 0
skills/browse/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/browse/SKILL.md

+ 1 - 0
skills/canary/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/canary/SKILL.md

+ 1 - 0
skills/careful/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/careful/SKILL.md

+ 1 - 0
skills/checkpoint/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/checkpoint/SKILL.md

+ 1 - 0
skills/codex/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/codex/SKILL.md

+ 1 - 0
skills/connect-chrome/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/connect-chrome/SKILL.md

+ 1 - 0
skills/cso/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/cso/SKILL.md

+ 1 - 0
skills/design-consultation/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/design-consultation/SKILL.md

+ 1 - 0
skills/design-html/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/design-html/SKILL.md

+ 1 - 0
skills/design-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/design-review/SKILL.md

+ 1 - 0
skills/design-shotgun/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/design-shotgun/SKILL.md

+ 1 - 0
skills/devex-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/devex-review/SKILL.md

+ 1 - 0
skills/document-release/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/document-release/SKILL.md

+ 1 - 0
skills/freeze/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/freeze/SKILL.md

+ 1 - 0
skills/graphify/.graphify_version

@@ -0,0 +1 @@
+0.4.2

+ 1276 - 0
skills/graphify/SKILL.md

@@ -0,0 +1,1276 @@
+---
+name: graphify
+description: any input (code, docs, papers, images) → knowledge graph → clustered communities → HTML + JSON + audit report
+trigger: /graphify
+---
+
+# /graphify
+
+Turn any folder of files into a navigable knowledge graph with community detection, an honest audit trail, and three outputs: interactive HTML, GraphRAG-ready JSON, and a plain-language GRAPH_REPORT.md.
+
+## Usage
+
+```
+/graphify                                             # full pipeline on current directory → Obsidian vault
+/graphify <path>                                      # full pipeline on specific path
+/graphify <path> --mode deep                          # thorough extraction, richer INFERRED edges
+/graphify <path> --update                             # incremental - re-extract only new/changed files
+/graphify <path> --directed                            # build directed graph (preserves edge direction: source→target)
+/graphify <path> --whisper-model medium                # use a larger Whisper model for better transcription accuracy
+/graphify <path> --cluster-only                       # rerun clustering on existing graph
+/graphify <path> --no-viz                             # skip visualization, just report + JSON
+/graphify <path> --html                               # (HTML is generated by default - this flag is a no-op)
+/graphify <path> --svg                                # also export graph.svg (embeds in Notion, GitHub)
+/graphify <path> --graphml                            # export graph.graphml (Gephi, yEd)
+/graphify <path> --neo4j                              # generate graphify-out/cypher.txt for Neo4j
+/graphify <path> --neo4j-push bolt://localhost:7687   # push directly to Neo4j
+/graphify <path> --mcp                                # start MCP stdio server for agent access
+/graphify <path> --watch                              # watch folder, auto-rebuild on code changes (no LLM needed)
+/graphify <path> --wiki                               # build agent-crawlable wiki (index.md + one article per community)
+/graphify <path> --obsidian --obsidian-dir ~/vaults/my-project  # write vault to custom path (e.g. existing vault)
+/graphify add <url>                                   # fetch URL, save to ./raw, update graph
+/graphify add <url> --author "Name"                   # tag who wrote it
+/graphify add <url> --contributor "Name"              # tag who added it to the corpus
+/graphify query "<question>"                          # BFS traversal - broad context
+/graphify query "<question>" --dfs                    # DFS - trace a specific path
+/graphify query "<question>" --budget 1500            # cap answer at N tokens
+/graphify path "AuthModule" "Database"                # shortest path between two concepts
+/graphify explain "SwinTransformer"                   # plain-language explanation of a node
+```
+
+## What graphify is for
+
+graphify is built around Andrej Karpathy's /raw folder workflow: drop anything into a folder - papers, tweets, screenshots, code, notes - and get a structured knowledge graph that shows you what you didn't know was connected.
+
+Three things it does that Claude alone cannot:
+1. **Persistent graph** - relationships are stored in `graphify-out/graph.json` and survive across sessions. Ask questions weeks later without re-reading everything.
+2. **Honest audit trail** - every edge is tagged EXTRACTED, INFERRED, or AMBIGUOUS. You know what was found vs invented.
+3. **Cross-document surprise** - community detection finds connections between concepts in different files that you would never think to ask about directly.
+
+Use it for:
+- A codebase you're new to (understand architecture before touching anything)
+- A reading list (papers + tweets + notes → one navigable graph)
+- A research corpus (citation graph + concept graph in one)
+- Your personal /raw folder (drop everything in, let it grow, query it)
+
+## What You Must Do When Invoked
+
+If no path was given, use `.` (current directory). Do not ask the user for a path.
+
+Follow these steps in order. Do not skip steps.
+
+### Step 1 - Ensure graphify is installed
+
+```bash
+# Detect the correct Python interpreter (handles pipx, venv, system installs)
+GRAPHIFY_BIN=$(which graphify 2>/dev/null)
+if [ -n "$GRAPHIFY_BIN" ]; then
+    PYTHON=$(head -1 "$GRAPHIFY_BIN" | tr -d '#!')
+    case "$PYTHON" in
+        *[!a-zA-Z0-9/_.-]*) PYTHON="python3" ;;
+    esac
+else
+    PYTHON="python3"
+fi
+"$PYTHON" -c "import graphify" 2>/dev/null || "$PYTHON" -m pip install graphifyy -q 2>/dev/null || "$PYTHON" -m pip install graphifyy -q --break-system-packages 2>&1 | tail -3
+# Write interpreter path for all subsequent steps (persists across invocations)
+mkdir -p graphify-out
+"$PYTHON" -c "import sys; open('graphify-out/.graphify_python', 'w').write(sys.executable)"
+```
+
+If the import succeeds, print nothing and move straight to Step 2.
+
+**In every subsequent bash block, replace `python3` with `$(cat graphify-out/.graphify_python)` to use the correct interpreter.**
+
+### Step 2 - Detect files
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.detect import detect
+from pathlib import Path
+result = detect(Path('INPUT_PATH'))
+print(json.dumps(result))
+" > graphify-out/.graphify_detect.json
+```
+
+Replace INPUT_PATH with the actual path the user provided. Do NOT cat or print the JSON - read it silently and present a clean summary instead:
+
+```
+Corpus: X files · ~Y words
+  code:     N files (.py .ts .go ...)
+  docs:     N files (.md .txt ...)
+  papers:   N files (.pdf ...)
+  images:   N files
+  video:    N files (.mp4 .mp3 ...)
+```
+
+Omit any category with 0 files from the summary.
+
+Then act on it:
+- If `total_files` is 0: stop with "No supported files found in [path]."
+- If `skipped_sensitive` is non-empty: mention file count skipped, not the file names.
+- If `total_words` > 2,000,000 OR `total_files` > 200: show the warning and the top 5 subdirectories by file count (one way to get that list is sketched right after these bullets), then ask which subfolder to run on. Wait for the user's answer before proceeding.
+- Otherwise: proceed directly to Step 2.5 if video files were detected, or Step 3 if not.
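+
+One possible way to get the top 5 subdirectories by file count for that prompt (plain shell, counting files per top-level subdirectory; replace INPUT_PATH as usual):
+
+```bash
+(cd INPUT_PATH && find . -mindepth 2 -type f | cut -d/ -f2 | sort | uniq -c | sort -rn | head -5)
+```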
+
+### Step 2.5 - Transcribe video / audio files (only if video files detected)
+
+Skip this step entirely if `detect` returned zero `video` files.
+
+Video and audio files cannot be read directly. Transcribe them to text first, then treat the transcripts as doc files in Step 3.
+
+**Strategy:** Read the god nodes from `graphify-out/.graphify_detect.json` (or the analysis file if it exists from a previous run). You are already a language model — write a one-sentence domain hint yourself from those labels. Then pass it to Whisper as the initial prompt. No separate API call needed.
+
+**However**, if the corpus has *only* video files and no other docs/code, use the generic fallback prompt: `"Use proper punctuation and paragraph breaks."`
+
+**Step 1 - Write the Whisper prompt yourself.**
+
+Read the top god node labels from detect output or analysis, then compose a short domain hint sentence, for example:
+
+- Labels: `transformer, attention, encoder, decoder` → `"Machine learning research on transformer architectures and attention mechanisms. Use proper punctuation and paragraph breaks."`
+- Labels: `kubernetes, deployment, pod, helm` → `"DevOps discussion about Kubernetes deployments and Helm charts. Use proper punctuation and paragraph breaks."`
+
+Export it as `GRAPHIFY_WHISPER_PROMPT` so the transcribe command below can read it from the environment, for example:
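+
+```bash
+# Illustrative value only - compose the sentence from this corpus's own labels
+export GRAPHIFY_WHISPER_PROMPT="Machine learning research on transformer architectures and attention mechanisms. Use proper punctuation and paragraph breaks."
+```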
+
+**Step 2 - Transcribe:**
+
+```bash
+export GRAPHIFY_WHISPER_MODEL=base  # or whatever --whisper-model the user passed
+$(cat graphify-out/.graphify_python) -c "
+import json, os
+from pathlib import Path
+from graphify.transcribe import transcribe_all
+
+detect = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+video_files = detect.get('files', {}).get('video', [])
+prompt = os.environ.get('GRAPHIFY_WHISPER_PROMPT', 'Use proper punctuation and paragraph breaks.')
+
+transcript_paths = transcribe_all(video_files, initial_prompt=prompt)
+print(json.dumps(transcript_paths))
+" > graphify-out/.graphify_transcripts.json
+```
+
+After transcription:
+- Read the transcript paths from `graphify-out/.graphify_transcripts.json`
+- Add them to the docs list before dispatching semantic subagents in Step 3B
+- Print how many transcripts were created: `Transcribed N video file(s) -> treating as docs`
+- If transcription fails for a file, print a warning and continue with the rest
+
+**Whisper model:** Default is `base`. If the user passed `--whisper-model <name>`, set `GRAPHIFY_WHISPER_MODEL=<name>` in the environment before running the command above.
+
+### Step 3 - Extract entities and relationships
+
+**Before starting:** note whether `--mode deep` was given. You must pass `DEEP_MODE=true` to every subagent in Step B2 if it was. Track this from the original invocation - do not lose it.
+
+This step has two parts: **structural extraction** (deterministic, free) and **semantic extraction** (Claude, costs tokens).
+
+**Run Part A (AST) and Part B (semantic) in parallel. Dispatch all semantic subagents AND start AST extraction in the same message. Both can run simultaneously since they operate on different file types. Merge results in Part C as before.**
+
+Note: Parallelizing AST + semantic saves 5-15s on large corpora. AST is deterministic and fast; start it while subagents are processing docs/papers.
+
+#### Part A - Structural extraction for code files
+
+For any code files detected, run AST extraction in parallel with Part B subagents:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.extract import collect_files, extract
+from pathlib import Path
+
+code_files = []
+detect = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+for f in detect.get('files', {}).get('code', []):
+    code_files.extend(collect_files(Path(f)) if Path(f).is_dir() else [Path(f)])
+
+if code_files:
+    result = extract(code_files)
+    Path('graphify-out/.graphify_ast.json').write_text(json.dumps(result, indent=2))
+    print(f'AST: {len(result[\"nodes\"])} nodes, {len(result[\"edges\"])} edges')
+else:
+    Path('graphify-out/.graphify_ast.json').write_text(json.dumps({'nodes':[],'edges':[],'input_tokens':0,'output_tokens':0}))
+    print('No code files - skipping AST extraction')
+"
+```
+
+#### Part B - Semantic extraction (parallel subagents)
+
+**Fast path:** If detection found zero docs, papers, and images (code-only corpus), skip Part B entirely and go straight to Part C. AST handles code - there is nothing for semantic subagents to do.
+
+**MANDATORY: You MUST use the Agent tool here. Reading files yourself one-by-one is forbidden - it is 5-10x slower. If you do not use the Agent tool you are doing this wrong.**
+
+Before dispatching subagents, print a timing estimate:
+- Load `total_words` and file counts from `graphify-out/.graphify_detect.json`
+- Estimate agents needed: `ceil(uncached_non_code_files / 22)` (chunk size is 20-25)
+- Estimate time: ~45s per agent batch (they run in parallel, so total ≈ 45s × ceil(agents/parallel_limit)) - a quick sketch of this arithmetic follows after these bullets
+- Print: "Semantic extraction: ~N files → X agents, estimated ~Ys"
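+
+A minimal sketch of that estimate, using the non-code file counts from detect as an upper bound (Step B0's cache check may lower it; `parallel` is an assumed concurrency cap, not a graphify setting):
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json, math
+from pathlib import Path
+
+detect = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+non_code = sum(len(v) for k, v in detect['files'].items() if k != 'code')
+agents = math.ceil(non_code / 22)
+parallel = 10  # assumed number of subagents dispatched at once
+est = 45 * math.ceil(agents / parallel) if agents else 0
+print(f'Semantic extraction: ~{non_code} files -> {agents} agents, estimated ~{est}s')
+"
+```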
+
+**Step B0 - Check extraction cache first**
+
+Before dispatching any subagents, check which files already have cached extraction results:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.cache import check_semantic_cache
+from pathlib import Path
+
+detect = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+all_files = [f for files in detect['files'].values() for f in files]
+
+cached_nodes, cached_edges, cached_hyperedges, uncached = check_semantic_cache(all_files)
+
+if cached_nodes or cached_edges or cached_hyperedges:
+    Path('graphify-out/.graphify_cached.json').write_text(json.dumps({'nodes': cached_nodes, 'edges': cached_edges, 'hyperedges': cached_hyperedges}))
+Path('graphify-out/.graphify_uncached.txt').write_text('\n'.join(uncached))
+print(f'Cache: {len(all_files)-len(uncached)} files hit, {len(uncached)} files need extraction')
+"
+```
+
+Only dispatch subagents for files listed in `graphify-out/.graphify_uncached.txt`. If all files are cached, skip to Part C directly.
+
+**Step B1 - Split into chunks**
+
+Load files from `graphify-out/.graphify_uncached.txt`. Split into chunks of 20-25 files each. Each image gets its own chunk (vision needs separate context). When splitting, group files from the same directory together so related artifacts land in the same chunk and cross-file relationships are more likely to be extracted.
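+
+A minimal sketch of one such split (the chunk size of 22 and the image-extension set are illustrative; the grouping relies on a lexicographic sort keeping same-directory paths adjacent):
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+from pathlib import Path
+
+files = [f for f in Path('graphify-out/.graphify_uncached.txt').read_text().splitlines() if f]
+img_exts = {'.png', '.jpg', '.jpeg', '.webp', '.gif'}
+images = [f for f in files if Path(f).suffix.lower() in img_exts]
+others = sorted(f for f in files if f not in set(images))
+chunks = [[img] for img in images] + [others[i:i+22] for i in range(0, len(others), 22)]
+print(f'{len(files)} files -> {len(chunks)} chunks')
+"
+```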
+
+**Step B2 - Dispatch ALL subagents in a single message**
+
+Call the Agent tool multiple times IN THE SAME RESPONSE - one call per chunk. This is the only way they run in parallel. If you make one Agent call, wait, then make another, you are doing it sequentially and defeating the purpose.
+
+**IMPORTANT - subagent type:** Always use `subagent_type="general-purpose"`. Do NOT use `Explore` - it is read-only and cannot write chunk files to disk, which silently drops extraction results. General-purpose has Write and Bash access which the subagent needs.
+
+Concrete example for 3 chunks:
+```
+[Agent tool call 1: files 1-15, subagent_type="general-purpose"]
+[Agent tool call 2: files 16-30, subagent_type="general-purpose"]
+[Agent tool call 3: files 31-45, subagent_type="general-purpose"]
+```
+All three in one message. Not three separate messages.
+
+Each subagent receives this exact prompt (substitute FILE_LIST, CHUNK_NUM, TOTAL_CHUNKS, and DEEP_MODE):
+
+```
+You are a graphify extraction subagent. Read the files listed and extract a knowledge graph fragment.
+Output ONLY valid JSON matching the schema below - no explanation, no markdown fences, no preamble.
+
+Files (chunk CHUNK_NUM of TOTAL_CHUNKS):
+FILE_LIST
+
+Rules:
+- EXTRACTED: relationship explicit in source (import, call, citation, "see §3.2")
+- INFERRED: reasonable inference (shared data structure, implied dependency)
+- AMBIGUOUS: uncertain - flag for review, do not omit
+
+Code files: focus on semantic edges AST cannot find (call relationships, shared data, arch patterns).
+  Do not re-extract imports - AST already has those.
+Doc/paper files: extract named concepts, entities, citations. Also extract rationale — sections that explain WHY a decision was made, trade-offs chosen, or design intent. These become nodes with `rationale_for` edges pointing to the concept they explain.
+Image files: use vision to understand what the image IS - do not just OCR.
+  UI screenshot: layout patterns, design decisions, key elements, purpose.
+  Chart: metric, trend/insight, data source.
+  Tweet/post: claim as node, author, concepts mentioned.
+  Diagram: components and connections.
+  Research figure: what it demonstrates, method, result.
+  Handwritten/whiteboard: ideas and arrows, mark uncertain readings AMBIGUOUS.
+
+DEEP_MODE (if --mode deep was given): be aggressive with INFERRED edges - indirect deps,
+  shared assumptions, latent couplings. Mark uncertain ones AMBIGUOUS instead of omitting.
+
+Semantic similarity: if two concepts in this chunk solve the same problem or represent the same idea without any structural link (no import, no call, no citation), add a `semantically_similar_to` edge marked INFERRED with a confidence_score reflecting how similar they are (0.6-0.95). Examples:
+- Two functions that both validate user input but never call each other
+- A class in code and a concept in a paper that describe the same algorithm
+- Two error types that handle the same failure mode differently
+Only add these when the similarity is genuinely non-obvious and cross-cutting. Do not add them for trivially similar things.
+
+Hyperedges: if 3 or more nodes clearly participate together in a shared concept, flow, or pattern that is not captured by pairwise edges alone, add a hyperedge to a top-level `hyperedges` array. Examples:
+- All classes that implement a common protocol or interface
+- All functions in an authentication flow (even if they don't all call each other)
+- All concepts from a paper section that form one coherent idea
+Use sparingly — only when the group relationship adds information beyond the pairwise edges. Maximum 3 hyperedges per chunk.
+
+If a file has YAML frontmatter (--- ... ---), copy source_url, captured_at, author,
+  contributor onto every node from that file.
+
+confidence_score is REQUIRED on every edge - never omit it, never use 0.5 as a default:
+- EXTRACTED edges: confidence_score = 1.0 always
+- INFERRED edges: reason about each edge individually.
+  Direct structural evidence (shared data structure, clear dependency): 0.8-0.9.
+  Reasonable inference with some uncertainty: 0.6-0.7.
+  Weak or speculative: 0.4-0.5. Most edges should be 0.6-0.9, not 0.5.
+- AMBIGUOUS edges: 0.1-0.3
+
+Output exactly this JSON (no other text):
+{"nodes":[{"id":"filestem_entityname","label":"Human Readable Name","file_type":"code|document|paper|image","source_file":"relative/path","source_location":null,"source_url":null,"captured_at":null,"author":null,"contributor":null}],"edges":[{"source":"node_id","target":"node_id","relation":"calls|implements|references|cites|conceptually_related_to|shares_data_with|semantically_similar_to|rationale_for","confidence":"EXTRACTED|INFERRED|AMBIGUOUS","confidence_score":1.0,"source_file":"relative/path","source_location":null,"weight":1.0}],"hyperedges":[{"id":"snake_case_id","label":"Human Readable Label","nodes":["node_id1","node_id2","node_id3"],"relation":"participate_in|implement|form","confidence":"EXTRACTED|INFERRED","confidence_score":0.75,"source_file":"relative/path"}],"input_tokens":0,"output_tokens":0}
+```
+
+**Step B3 - Collect, cache, and merge**
+
+Wait for all subagents. For each result:
+- Check that `graphify-out/.graphify_chunk_NN.json` exists on disk — this is the success signal
+- If the file exists and contains valid JSON with `nodes` and `edges`, include it and save to cache
+- If the file is missing, the subagent was likely dispatched as read-only (Explore type) — print a warning: "chunk N missing from disk — subagent may have been read-only. Re-run with general-purpose agent." Do not silently skip.
+- If a subagent failed or returned invalid JSON, print a warning and skip that chunk - do not abort
+
+If more than half the chunks failed or are missing, stop and tell the user to re-run and ensure `subagent_type="general-purpose"` is used.
+
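+Merge the surviving chunk files into `graphify-out/.graphify_semantic_new.json` before caching - a minimal sketch, assuming the chunk files are named `.graphify_chunk_NN.json` as described above:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json, glob
+from pathlib import Path
+
+merged = {'nodes': [], 'edges': [], 'hyperedges': [], 'input_tokens': 0, 'output_tokens': 0}
+for p in sorted(glob.glob('graphify-out/.graphify_chunk_*.json')):
+    chunk = json.loads(Path(p).read_text())
+    merged['nodes'] += chunk.get('nodes', [])
+    merged['edges'] += chunk.get('edges', [])
+    merged['hyperedges'] += chunk.get('hyperedges', [])
+    merged['input_tokens'] += chunk.get('input_tokens', 0)
+    merged['output_tokens'] += chunk.get('output_tokens', 0)
+Path('graphify-out/.graphify_semantic_new.json').write_text(json.dumps(merged, indent=2))
+print(f'Merged {len(merged[\"nodes\"])} nodes, {len(merged[\"edges\"])} edges from chunk files')
+"
+```
+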
+Save new results to cache:
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.cache import save_semantic_cache
+from pathlib import Path
+
+new = json.loads(Path('graphify-out/.graphify_semantic_new.json').read_text()) if Path('graphify-out/.graphify_semantic_new.json').exists() else {'nodes':[],'edges':[],'hyperedges':[]}
+saved = save_semantic_cache(new.get('nodes', []), new.get('edges', []), new.get('hyperedges', []))
+print(f'Cached {saved} files')
+"
+```
+
+Merge cached + new results into `graphify-out/.graphify_semantic.json`:
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from pathlib import Path
+
+cached = json.loads(Path('graphify-out/.graphify_cached.json').read_text()) if Path('graphify-out/.graphify_cached.json').exists() else {'nodes':[],'edges':[],'hyperedges':[]}
+new = json.loads(Path('graphify-out/.graphify_semantic_new.json').read_text()) if Path('graphify-out/.graphify_semantic_new.json').exists() else {'nodes':[],'edges':[],'hyperedges':[]}
+
+all_nodes = cached['nodes'] + new.get('nodes', [])
+all_edges = cached['edges'] + new.get('edges', [])
+all_hyperedges = cached.get('hyperedges', []) + new.get('hyperedges', [])
+seen = set()
+deduped = []
+for n in all_nodes:
+    if n['id'] not in seen:
+        seen.add(n['id'])
+        deduped.append(n)
+
+merged = {
+    'nodes': deduped,
+    'edges': all_edges,
+    'hyperedges': all_hyperedges,
+    'input_tokens': new.get('input_tokens', 0),
+    'output_tokens': new.get('output_tokens', 0),
+}
+Path('graphify-out/.graphify_semantic.json').write_text(json.dumps(merged, indent=2))
+print(f'Extraction complete - {len(deduped)} nodes, {len(all_edges)} edges ({len(cached[\"nodes\"])} from cache, {len(new.get(\"nodes\",[]))} new)')
+"
+```
+Clean up temp files: `rm -f graphify-out/.graphify_cached.json graphify-out/.graphify_uncached.txt graphify-out/.graphify_semantic_new.json`
+
+#### Part C - Merge AST + semantic into final extraction
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from pathlib import Path
+
+ast = json.loads(Path('graphify-out/.graphify_ast.json').read_text())
+sem = json.loads(Path('graphify-out/.graphify_semantic.json').read_text())
+
+# Merge: AST nodes first, semantic nodes deduplicated by id
+seen = {n['id'] for n in ast['nodes']}
+merged_nodes = list(ast['nodes'])
+for n in sem['nodes']:
+    if n['id'] not in seen:
+        merged_nodes.append(n)
+        seen.add(n['id'])
+
+merged_edges = ast['edges'] + sem['edges']
+merged_hyperedges = sem.get('hyperedges', [])
+merged = {
+    'nodes': merged_nodes,
+    'edges': merged_edges,
+    'hyperedges': merged_hyperedges,
+    'input_tokens': sem.get('input_tokens', 0),
+    'output_tokens': sem.get('output_tokens', 0),
+}
+Path('graphify-out/.graphify_extract.json').write_text(json.dumps(merged, indent=2))
+total = len(merged_nodes)
+edges = len(merged_edges)
+print(f'Merged: {total} nodes, {edges} edges ({len(ast[\"nodes\"])} AST + {len(sem[\"nodes\"])} semantic)')
+"
+```
+
+### Step 4 - Build graph, cluster, analyze, generate outputs
+
+**Before starting:** note whether `--directed` was given. If so, pass `directed=True` to `build_from_json()` in the code block below. This builds a `DiGraph` that preserves edge direction (source→target) instead of the default undirected `Graph`.
+
+```bash
+mkdir -p graphify-out
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.cluster import cluster, score_all
+from graphify.analyze import god_nodes, surprising_connections, suggest_questions
+from graphify.report import generate
+from graphify.export import to_json
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+detection  = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+
+G = build_from_json(extraction)
+communities = cluster(G)
+cohesion = score_all(G, communities)
+tokens = {'input': extraction.get('input_tokens', 0), 'output': extraction.get('output_tokens', 0)}
+gods = god_nodes(G)
+surprises = surprising_connections(G, communities)
+labels = {cid: 'Community ' + str(cid) for cid in communities}
+# Placeholder questions - regenerated with real labels in Step 5
+questions = suggest_questions(G, communities, labels)
+
+report = generate(G, communities, cohesion, labels, gods, surprises, detection, tokens, 'INPUT_PATH', suggested_questions=questions)
+Path('graphify-out/GRAPH_REPORT.md').write_text(report)
+to_json(G, communities, 'graphify-out/graph.json')
+
+analysis = {
+    'communities': {str(k): v for k, v in communities.items()},
+    'cohesion': {str(k): v for k, v in cohesion.items()},
+    'gods': gods,
+    'surprises': surprises,
+    'questions': questions,
+}
+Path('graphify-out/.graphify_analysis.json').write_text(json.dumps(analysis, indent=2))
+if G.number_of_nodes() == 0:
+    print('ERROR: Graph is empty - extraction produced no nodes.')
+    print('Possible causes: all files were skipped, binary-only corpus, or extraction failed.')
+    raise SystemExit(1)
+print(f'Graph: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges, {len(communities)} communities')
+"
+```
+
+If this step prints `ERROR: Graph is empty`, stop and tell the user what happened - do not proceed to labeling or visualization.
+
+Replace INPUT_PATH with the actual path.
+
+### Step 5 - Label communities
+
+Read `graphify-out/.graphify_analysis.json`. For each community key, look at its node labels and write a 2-5 word plain-language name (e.g. "Attention Mechanism", "Training Pipeline", "Data Loading").
+
+Then regenerate the report and save the labels for the visualizer:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.cluster import score_all
+from graphify.analyze import god_nodes, surprising_connections, suggest_questions
+from graphify.report import generate
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+detection  = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+cohesion = {int(k): v for k, v in analysis['cohesion'].items()}
+tokens = {'input': extraction.get('input_tokens', 0), 'output': extraction.get('output_tokens', 0)}
+
+# LABELS - replace these with the names you chose above
+labels = LABELS_DICT
+
+# Regenerate questions with real community labels (labels affect question phrasing)
+questions = suggest_questions(G, communities, labels)
+
+report = generate(G, communities, cohesion, labels, analysis['gods'], analysis['surprises'], detection, tokens, 'INPUT_PATH', suggested_questions=questions)
+Path('graphify-out/GRAPH_REPORT.md').write_text(report)
+Path('graphify-out/.graphify_labels.json').write_text(json.dumps({str(k): v for k, v in labels.items()}))
+print('Report updated with community labels')
+"
+```
+
+Replace `LABELS_DICT` with the actual dict you constructed (e.g. `{0: "Attention Mechanism", 1: "Training Pipeline"}`).
+Replace INPUT_PATH with the actual path.
+
+### Step 6 - Generate Obsidian vault (opt-in) + HTML
+
+**Generate HTML always** (unless `--no-viz`). **Obsidian vault only if `--obsidian` was explicitly given** — skip it otherwise, it generates one file per node.
+
+If `--obsidian` was given:
+
+- If `--obsidian-dir <path>` was also given, use that path as the vault directory. Otherwise default to `graphify-out/obsidian`.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.export import to_obsidian, to_canvas
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+labels_raw = json.loads(Path('graphify-out/.graphify_labels.json').read_text()) if Path('graphify-out/.graphify_labels.json').exists() else {}
+
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+cohesion = {int(k): v for k, v in analysis['cohesion'].items()}
+labels = {int(k): v for k, v in labels_raw.items()}
+
+obsidian_dir = 'OBSIDIAN_DIR'  # replace with --obsidian-dir value, or 'graphify-out/obsidian' if not given
+
+n = to_obsidian(G, communities, obsidian_dir, community_labels=labels or None, cohesion=cohesion)
+print(f'Obsidian vault: {n} notes in {obsidian_dir}/')
+
+to_canvas(G, communities, f'{obsidian_dir}/graph.canvas', community_labels=labels or None)
+print(f'Canvas: {obsidian_dir}/graph.canvas - open in Obsidian for structured community layout')
+print()
+print(f'Open {obsidian_dir}/ as a vault in Obsidian.')
+print('  Graph view   - nodes colored by community (set automatically)')
+print('  graph.canvas - structured layout with communities as groups')
+print('  _COMMUNITY_* - overview notes with cohesion scores and dataview queries')
+"
+```
+
+Generate the HTML graph (always, unless `--no-viz`):
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.export import to_html
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+labels_raw = json.loads(Path('graphify-out/.graphify_labels.json').read_text()) if Path('graphify-out/.graphify_labels.json').exists() else {}
+
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+labels = {int(k): v for k, v in labels_raw.items()}
+
+if G.number_of_nodes() > 5000:
+    print(f'Graph has {G.number_of_nodes()} nodes - too large for HTML viz. Use Obsidian vault instead.')
+else:
+    to_html(G, communities, 'graphify-out/graph.html', community_labels=labels or None)
+    print('graph.html written - open in any browser, no server needed')
+"
+```
+
+### Step 7 - Neo4j export (only if --neo4j or --neo4j-push flag)
+
+**If `--neo4j`** - generate a Cypher file for manual import:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.export import to_cypher
+from pathlib import Path
+
+G = build_from_json(json.loads(Path('graphify-out/.graphify_extract.json').read_text()))
+to_cypher(G, 'graphify-out/cypher.txt')
+print('cypher.txt written - import with: cypher-shell < graphify-out/cypher.txt')
+"
+```
+
+**If `--neo4j-push <uri>`** - push directly to a running Neo4j instance. Ask the user for credentials if not provided:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.cluster import cluster
+from graphify.export import push_to_neo4j
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+
+result = push_to_neo4j(G, uri='NEO4J_URI', user='NEO4J_USER', password='NEO4J_PASSWORD', communities=communities)
+print(f'Pushed to Neo4j: {result[\"nodes\"]} nodes, {result[\"edges\"]} edges')
+"
+```
+
+Replace `NEO4J_URI`, `NEO4J_USER`, `NEO4J_PASSWORD` with actual values. Default URI is `bolt://localhost:7687`, default user is `neo4j`. Uses MERGE - safe to re-run without creating duplicates.
+
+### Step 7b - SVG export (only if --svg flag)
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.export import to_svg
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+labels_raw = json.loads(Path('graphify-out/.graphify_labels.json').read_text()) if Path('graphify-out/.graphify_labels.json').exists() else {}
+
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+labels = {int(k): v for k, v in labels_raw.items()}
+
+to_svg(G, communities, 'graphify-out/graph.svg', community_labels=labels or None)
+print('graph.svg written - embeds in Obsidian, Notion, GitHub READMEs')
+"
+```
+
+### Step 7c - GraphML export (only if --graphml flag)
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.build import build_from_json
+from graphify.export import to_graphml
+from pathlib import Path
+
+extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+analysis   = json.loads(Path('graphify-out/.graphify_analysis.json').read_text())
+
+G = build_from_json(extraction)
+communities = {int(k): v for k, v in analysis['communities'].items()}
+
+to_graphml(G, communities, 'graphify-out/graph.graphml')
+print('graph.graphml written - open in Gephi, yEd, or any GraphML tool')
+"
+```
+
+### Step 7d - MCP server (only if --mcp flag)
+
+```bash
+$(cat graphify-out/.graphify_python) -m graphify.serve graphify-out/graph.json
+```
+
+This starts a stdio MCP server that exposes tools: `query_graph`, `get_node`, `get_neighbors`, `get_community`, `god_nodes`, `graph_stats`, `shortest_path`. Add to Claude Desktop or any MCP-compatible agent orchestrator so other agents can query the graph live.
+
+To configure in Claude Desktop, add to `claude_desktop_config.json`:
+```json
+{
+  "mcpServers": {
+    "graphify": {
+      "command": "python3",
+      "args": ["-m", "graphify.serve", "/absolute/path/to/graphify-out/graph.json"]
+    }
+  }
+}
+```
+
+### Step 8 - Token reduction benchmark (only if total_words > 5000)
+
+If `total_words` from `graphify-out/.graphify_detect.json` is greater than 5,000, run:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.benchmark import run_benchmark, print_benchmark
+from pathlib import Path
+
+detection = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+result = run_benchmark('graphify-out/graph.json', corpus_words=detection['total_words'])
+print_benchmark(result)
+"
+```
+
+Print the output directly in chat. If `total_words <= 5000`, skip silently - the graph value is structural clarity, not token compression, for small corpora.
+
+---
+
+### Step 9 - Save manifest, update cost tracker, clean up, and report
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from pathlib import Path
+from datetime import datetime, timezone
+from graphify.detect import save_manifest
+
+# Save manifest for --update
+detect = json.loads(Path('graphify-out/.graphify_detect.json').read_text())
+save_manifest(detect['files'])
+
+# Update cumulative cost tracker
+extract = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+input_tok = extract.get('input_tokens', 0)
+output_tok = extract.get('output_tokens', 0)
+
+cost_path = Path('graphify-out/cost.json')
+if cost_path.exists():
+    cost = json.loads(cost_path.read_text())
+else:
+    cost = {'runs': [], 'total_input_tokens': 0, 'total_output_tokens': 0}
+
+cost['runs'].append({
+    'date': datetime.now(timezone.utc).isoformat(),
+    'input_tokens': input_tok,
+    'output_tokens': output_tok,
+    'files': detect.get('total_files', 0),
+})
+cost['total_input_tokens'] += input_tok
+cost['total_output_tokens'] += output_tok
+cost_path.write_text(json.dumps(cost, indent=2))
+
+print(f'This run: {input_tok:,} input tokens, {output_tok:,} output tokens')
+print(f'All time: {cost[\"total_input_tokens\"]:,} input, {cost[\"total_output_tokens\"]:,} output ({len(cost[\"runs\"])} runs)')
+"
+rm -f graphify-out/.graphify_detect.json graphify-out/.graphify_extract.json graphify-out/.graphify_ast.json graphify-out/.graphify_semantic.json graphify-out/.graphify_analysis.json graphify-out/.graphify_labels.json
+rm -f graphify-out/.needs_update 2>/dev/null || true
+```
+
+Tell the user (omit the obsidian line unless --obsidian was given):
+```
+Graph complete. Outputs in PATH_TO_DIR/graphify-out/
+
+  graph.html            - interactive graph, open in browser
+  GRAPH_REPORT.md       - audit report
+  graph.json            - raw graph data
+  obsidian/             - Obsidian vault (only if --obsidian was given)
+```
+
+If graphify saved you time, consider supporting it: https://github.com/sponsors/safishamsi
+
+Replace PATH_TO_DIR with the actual absolute path of the directory that was processed.
+
+Then paste these sections from GRAPH_REPORT.md directly into the chat:
+- God Nodes
+- Surprising Connections
+- Suggested Questions
+
+Do NOT paste the full report - just those three sections. Keep it concise.
+
+Then immediately offer to explore. Pick the single most interesting suggested question from the report - the one that crosses the most community boundaries or has the most surprising bridge node - and ask:
+
+> "The most interesting question this graph can answer: **[question]**. Want me to trace it?"
+
+If the user says yes, run `/graphify query "[question]"` on the graph and walk them through the answer using the graph structure - which nodes connect, which community boundaries get crossed, what the path reveals. Keep going as long as they want to explore. Each answer should end with a natural follow-up ("this connects to X - want to go deeper?") so the session feels like navigation, not a one-shot report.
+
+The graph is the map. Your job after the pipeline is to be the guide.
+
+---
+
+## Interpreter guard for subcommands
+
+Before running any subcommand below (`--update`, `--cluster-only`, `query`, `path`, `explain`, `add`), check that `.graphify_python` exists. If it's missing (e.g. user deleted `graphify-out/`), re-resolve the interpreter first:
+
+```bash
+if [ ! -f graphify-out/.graphify_python ]; then
+    GRAPHIFY_BIN=$(which graphify 2>/dev/null)
+    if [ -n "$GRAPHIFY_BIN" ]; then
+        PYTHON=$(head -1 "$GRAPHIFY_BIN" | tr -d '#!')
+        case "$PYTHON" in *[!a-zA-Z0-9/_.-]*) PYTHON="python3" ;; esac
+    else
+        PYTHON="python3"
+    fi
+    mkdir -p graphify-out
+    "$PYTHON" -c "import sys; open('graphify-out/.graphify_python', 'w').write(sys.executable)"
+fi
+```
+
+## For --update (incremental re-extraction)
+
+Use when you've added or modified files since the last run. Only re-extracts changed files - saves tokens and time.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.detect import detect_incremental, save_manifest
+from pathlib import Path
+
+result = detect_incremental(Path('INPUT_PATH'))
+new_total = result.get('new_total', 0)
+print(json.dumps(result, indent=2))
+Path('graphify-out/.graphify_incremental.json').write_text(json.dumps(result))
+if new_total == 0:
+    print('No files changed since last run. Nothing to update.')
+    raise SystemExit(0)
+print(f'{new_total} new/changed file(s) to re-extract.')
+"
+```
+
+If new files exist, first check whether all changed files are code files:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from pathlib import Path
+
+result = json.loads(open('graphify-out/.graphify_incremental.json').read()) if Path('graphify-out/.graphify_incremental.json').exists() else {}
+code_exts = {'.py','.ts','.js','.go','.rs','.java','.cpp','.c','.rb','.swift','.kt','.cs','.scala','.php','.cc','.cxx','.hpp','.h','.kts','.lua','.toc'}
+new_files = result.get('new_files', {})
+all_changed = [f for files in new_files.values() for f in files]
+code_only = all(Path(f).suffix.lower() in code_exts for f in all_changed)
+print('code_only:', code_only)
+"
+```
+
+If `code_only` is True: print `[graphify update] Code-only changes detected - skipping semantic extraction (no LLM needed)`, run only Step 3A (AST) on the changed files, skip Step 3B entirely (no subagents), then go straight to merge and Steps 4–8.
+
+If `code_only` is False (any changed file is a doc/paper/image): run the full Steps 3A–3C pipeline as normal.
+
+Then:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.build import build_from_json
+from graphify.export import to_json
+from networkx.readwrite import json_graph
+import networkx as nx
+from pathlib import Path
+
+# Load existing graph
+existing_data = json.loads(Path('graphify-out/graph.json').read_text())
+G_existing = json_graph.node_link_graph(existing_data, edges='links')
+
+# Load new extraction
+new_extraction = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+G_new = build_from_json(new_extraction)
+
+# Prune nodes from deleted files
+incremental = json.loads(Path('graphify-out/.graphify_incremental.json').read_text())
+deleted = set(incremental.get('deleted_files', []))
+if deleted:
+    to_remove = [n for n, d in G_existing.nodes(data=True) if d.get('source_file') in deleted]
+    G_existing.remove_nodes_from(to_remove)
+    print(f'Pruned {len(to_remove)} ghost nodes from {len(deleted)} deleted file(s)')
+
+# Merge: new nodes/edges into existing graph
+G_existing.update(G_new)
+print(f'Merged: {G_existing.number_of_nodes()} nodes, {G_existing.number_of_edges()} edges')
+" 
+```
+
+Then run Steps 4–8 on the merged graph as normal.
+
+After Step 4, show the graph diff:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json
+from graphify.analyze import graph_diff
+from graphify.build import build_from_json
+from networkx.readwrite import json_graph
+import networkx as nx
+from pathlib import Path
+
+# Load old graph (before update) from backup written before merge
+old_data = json.loads(Path('graphify-out/.graphify_old.json').read_text()) if Path('graphify-out/.graphify_old.json').exists() else None
+new_extract = json.loads(Path('graphify-out/.graphify_extract.json').read_text())
+G_new = build_from_json(new_extract)
+
+if old_data:
+    G_old = json_graph.node_link_graph(old_data, edges='links')
+    diff = graph_diff(G_old, G_new)
+    print(diff['summary'])
+    if diff['new_nodes']:
+        print('New nodes:', ', '.join(n['label'] for n in diff['new_nodes'][:5]))
+    if diff['new_edges']:
+        print('New edges:', len(diff['new_edges']))
+"
+```
+
+Before the merge step, save the old graph: `cp graphify-out/graph.json graphify-out/.graphify_old.json`
+Clean up after: `rm -f graphify-out/.graphify_old.json`
+
+---
+
+## For --cluster-only
+
+Skip Steps 1–3. Load the existing graph from `graphify-out/graph.json` and re-run clustering:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from graphify.cluster import cluster, score_all
+from graphify.analyze import god_nodes, surprising_connections
+from graphify.report import generate
+from graphify.export import to_json
+from networkx.readwrite import json_graph
+import networkx as nx
+from pathlib import Path
+
+data = json.loads(Path('graphify-out/graph.json').read_text())
+G = json_graph.node_link_graph(data, edges='links')
+
+detection = {'total_files': 0, 'total_words': 99999, 'needs_graph': True, 'warning': None,
+             'files': {'code': [], 'document': [], 'paper': []}}
+tokens = {'input': 0, 'output': 0}
+
+communities = cluster(G)
+cohesion = score_all(G, communities)
+gods = god_nodes(G)
+surprises = surprising_connections(G, communities)
+labels = {cid: 'Community ' + str(cid) for cid in communities}
+
+report = generate(G, communities, cohesion, labels, gods, surprises, detection, tokens, '.')
+Path('graphify-out/GRAPH_REPORT.md').write_text(report)
+to_json(G, communities, 'graphify-out/graph.json')
+
+analysis = {
+    'communities': {str(k): v for k, v in communities.items()},
+    'cohesion': {str(k): v for k, v in cohesion.items()},
+    'gods': gods,
+    'surprises': surprises,
+}
+Path('graphify-out/.graphify_analysis.json').write_text(json.dumps(analysis, indent=2))
+print(f'Re-clustered: {len(communities)} communities')
+"
+```
+
+Then run Steps 5–9 as normal (label communities, generate viz, benchmark, clean up, report).
+
+---
+
+## For /graphify query
+
+Two traversal modes - choose based on the question:
+
+| Mode | Flag | Best for |
+|------|------|----------|
+| BFS (default) | _(none)_ | "What is X connected to?" - broad context, nearest neighbors first |
+| DFS | `--dfs` | "How does X reach Y?" - trace a specific chain or dependency path |
+
+First check the graph exists:
+```bash
+$(cat graphify-out/.graphify_python) -c "
+from pathlib import Path
+if not Path('graphify-out/graph.json').exists():
+    print('ERROR: No graph found. Run /graphify <path> first to build the graph.')
+    raise SystemExit(1)
+"
+```
+If it fails, stop and tell the user to run `/graphify <path>` first.
+
+Load `graphify-out/graph.json`, then:
+
+1. Find the 1-3 nodes whose label best matches key terms in the question.
+2. Run the appropriate traversal from each starting node.
+3. Read the subgraph - node labels, edge relations, confidence tags, source locations.
+4. Answer using **only** what the graph contains. Quote `source_location` when citing a specific fact.
+5. If the graph lacks enough information, say so - do not hallucinate edges.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys, json
+from networkx.readwrite import json_graph
+import networkx as nx
+from pathlib import Path
+
+data = json.loads(Path('graphify-out/graph.json').read_text())
+G = json_graph.node_link_graph(data, edges='links')
+
+question = 'QUESTION'
+mode = 'MODE'  # 'bfs' or 'dfs'
+terms = [t.lower() for t in question.split() if len(t) > 3]
+
+# Find best-matching start nodes
+scored = []
+for nid, ndata in G.nodes(data=True):
+    label = ndata.get('label', '').lower()
+    score = sum(1 for t in terms if t in label)
+    if score > 0:
+        scored.append((score, nid))
+scored.sort(reverse=True)
+start_nodes = [nid for _, nid in scored[:3]]
+
+if not start_nodes:
+    print('No matching nodes found for query terms:', terms)
+    sys.exit(0)
+
+subgraph_nodes = set()
+subgraph_edges = []
+
+if mode == 'dfs':
+    # DFS: follow one path as deep as possible before backtracking.
+    # Depth-limited to 6 to avoid traversing the whole graph.
+    visited = set()
+    stack = [(n, 0) for n in reversed(start_nodes)]
+    while stack:
+        node, depth = stack.pop()
+        if node in visited or depth > 6:
+            continue
+        visited.add(node)
+        subgraph_nodes.add(node)
+        for neighbor in G.neighbors(node):
+            if neighbor not in visited:
+                stack.append((neighbor, depth + 1))
+                subgraph_edges.append((node, neighbor))
+else:
+    # BFS: explore all neighbors layer by layer up to depth 3.
+    frontier = set(start_nodes)
+    subgraph_nodes = set(start_nodes)
+    for _ in range(3):
+        next_frontier = set()
+        for n in frontier:
+            for neighbor in G.neighbors(n):
+                if neighbor not in subgraph_nodes:
+                    next_frontier.add(neighbor)
+                    subgraph_edges.append((n, neighbor))
+        subgraph_nodes.update(next_frontier)
+        frontier = next_frontier
+
+# Token-budget aware output: rank by relevance, cut at budget (~4 chars/token)
+token_budget = BUDGET  # default 2000
+char_budget = token_budget * 4
+
+# Score each node by term overlap for ranked output
+def relevance(nid):
+    label = G.nodes[nid].get('label', '').lower()
+    return sum(1 for t in terms if t in label)
+
+ranked_nodes = sorted(subgraph_nodes, key=relevance, reverse=True)
+
+lines = [f'Traversal: {mode.upper()} | Start: {[G.nodes[n].get(\"label\",n) for n in start_nodes]} | {len(subgraph_nodes)} nodes']
+for nid in ranked_nodes:
+    d = G.nodes[nid]
+    lines.append(f'  NODE {d.get(\"label\", nid)} [src={d.get(\"source_file\",\"\")} loc={d.get(\"source_location\",\"\")}]')
+for u, v in subgraph_edges:
+    if u in subgraph_nodes and v in subgraph_nodes:
+        d = G.edges[u, v]
+        lines.append(f'  EDGE {G.nodes[u].get(\"label\",u)} --{d.get(\"relation\",\"\")} [{d.get(\"confidence\",\"\")}]--> {G.nodes[v].get(\"label\",v)}')
+
+output = '\n'.join(lines)
+if len(output) > char_budget:
+    output = output[:char_budget] + f'\n... (truncated at ~{token_budget} token budget - use --budget N for more)'
+print(output)
+"
+```
+
+Replace `QUESTION` with the user's actual question, `MODE` with `bfs` or `dfs`, and `BUDGET` with the token budget (default `2000`, or whatever `--budget N` specifies). Then answer based on the subgraph output above.
+
+After writing the answer, save it back into the graph so it improves future queries:
+
+```bash
+$(cat graphify-out/.graphify_python) -m graphify save-result --question "QUESTION" --answer "ANSWER" --type query --nodes NODE1 NODE2
+```
+
+Replace `QUESTION` with the question, `ANSWER` with your full answer text, and `NODE1 NODE2` with the node labels you cited. This closes the feedback loop: the next `--update` will extract this Q&A as a node in the graph.
+
+---
+
+## For /graphify path
+
+Find the shortest path between two named concepts in the graph.
+
+First check the graph exists:
+```bash
+$(cat graphify-out/.graphify_python) -c "
+from pathlib import Path
+if not Path('graphify-out/graph.json').exists():
+    print('ERROR: No graph found. Run /graphify <path> first to build the graph.')
+    raise SystemExit(1)
+"
+```
+If it fails, stop and tell the user to run `/graphify <path>` first.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json, sys
+import networkx as nx
+from networkx.readwrite import json_graph
+from pathlib import Path
+
+data = json.loads(Path('graphify-out/graph.json').read_text())
+G = json_graph.node_link_graph(data, edges='links')
+
+a_term = 'NODE_A'
+b_term = 'NODE_B'
+
+def find_node(term):
+    term = term.lower()
+    scored = sorted(
+        [(sum(1 for w in term.split() if w in G.nodes[n].get('label','').lower()), n)
+         for n in G.nodes()],
+        reverse=True
+    )
+    return scored[0][1] if scored and scored[0][0] > 0 else None
+
+src = find_node(a_term)
+tgt = find_node(b_term)
+
+if not src or not tgt:
+    print(f'Could not find nodes matching: {a_term!r} or {b_term!r}')
+    sys.exit(0)
+
+try:
+    path = nx.shortest_path(G, src, tgt)
+    print(f'Shortest path ({len(path)-1} hops):')
+    for i, nid in enumerate(path):
+        label = G.nodes[nid].get('label', nid)
+        if i < len(path) - 1:
+            edge = G.edges[nid, path[i+1]]
+            rel = edge.get('relation', '')
+            conf = edge.get('confidence', '')
+            print(f'  {label} --{rel}--> [{conf}]')
+        else:
+            print(f'  {label}')
+except nx.NetworkXNoPath:
+    print(f'No path found between {a_term!r} and {b_term!r}')
+except nx.NodeNotFound as e:
+    print(f'Node not found: {e}')
+"
+```
+
+Replace `NODE_A` and `NODE_B` with the actual concept names from the user. Then explain the path in plain language - what each hop means, why it's significant.
+
+After writing the explanation, save it back:
+
+```bash
+$(cat graphify-out/.graphify_python) -m graphify save-result --question "Path from NODE_A to NODE_B" --answer "ANSWER" --type path_query --nodes NODE_A NODE_B
+```
+
+---
+
+## For /graphify explain
+
+Give a plain-language explanation of a single node - everything connected to it.
+
+First check the graph exists:
+```bash
+$(cat graphify-out/.graphify_python) -c "
+from pathlib import Path
+if not Path('graphify-out/graph.json').exists():
+    print('ERROR: No graph found. Run /graphify <path> first to build the graph.')
+    raise SystemExit(1)
+"
+```
+If it fails, stop and tell the user to run `/graphify <path>` first.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import json, sys
+import networkx as nx
+from networkx.readwrite import json_graph
+from pathlib import Path
+
+data = json.loads(Path('graphify-out/graph.json').read_text())
+G = json_graph.node_link_graph(data, edges='links')
+
+term = 'NODE_NAME'
+term_lower = term.lower()
+
+# Find best matching node
+scored = sorted(
+    [(sum(1 for w in term_lower.split() if w in G.nodes[n].get('label','').lower()), n)
+     for n in G.nodes()],
+    reverse=True
+)
+if not scored or scored[0][0] == 0:
+    print(f'No node matching {term!r}')
+    sys.exit(0)
+
+nid = scored[0][1]
+data_n = G.nodes[nid]
+print(f'NODE: {data_n.get(\"label\", nid)}')
+print(f'  source: {data_n.get(\"source_file\",\"unknown\")}')
+print(f'  type: {data_n.get(\"file_type\",\"unknown\")}')
+print(f'  degree: {G.degree(nid)}')
+print()
+print('CONNECTIONS:')
+for neighbor in G.neighbors(nid):
+    edge = G.edges[nid, neighbor]
+    nlabel = G.nodes[neighbor].get('label', neighbor)
+    rel = edge.get('relation', '')
+    conf = edge.get('confidence', '')
+    src_file = G.nodes[neighbor].get('source_file', '')
+    print(f'  --{rel}--> {nlabel} [{conf}] ({src_file})')
+"
+```
+
+Replace `NODE_NAME` with the concept the user asked about. Then write a 3-5 sentence explanation of what this node is, what it connects to, and why those connections are significant. Use the source locations as citations.
+
+After writing the explanation, save it back:
+
+```bash
+$(cat graphify-out/.graphify_python) -m graphify save-result --question "Explain NODE_NAME" --answer "ANSWER" --type explain --nodes NODE_NAME
+```
+
+---
+
+## For /graphify add
+
+Fetch a URL and add it to the corpus, then update the graph.
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys
+from graphify.ingest import ingest
+from pathlib import Path
+
+try:
+    out = ingest('URL', Path('./raw'), author='AUTHOR', contributor='CONTRIBUTOR')
+    print(f'Saved to {out}')
+except ValueError as e:
+    print(f'error: {e}', file=sys.stderr)
+    sys.exit(1)
+except RuntimeError as e:
+    print(f'error: {e}', file=sys.stderr)
+    sys.exit(1)
+"
+```
+
+Replace `URL` with the actual URL, `AUTHOR` with the user's name if provided, `CONTRIBUTOR` likewise. If the command exits with an error, tell the user what went wrong - do not silently continue. After a successful save, automatically run the `--update` pipeline on `./raw` to merge the new file into the existing graph.
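+
+As a concrete illustration (the URL, author, and contributor values here are hypothetical), adding an arXiv paper on behalf of a user named Alice would look like:
+
+```bash
+$(cat graphify-out/.graphify_python) -c "
+import sys
+from graphify.ingest import ingest
+from pathlib import Path
+
+try:
+    out = ingest('https://arxiv.org/abs/2401.00001', Path('./raw'), author='Alice', contributor='Alice')
+    print(f'Saved to {out}')
+except (ValueError, RuntimeError) as e:
+    print(f'error: {e}', file=sys.stderr)
+    sys.exit(1)
+"
+```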
+
+Supported URL types (auto-detected):
+- YouTube / any video URL → audio downloaded via yt-dlp, transcribed to `.txt` on the next run (requires `pip install 'graphifyy[video]'`)
+- Twitter/X → fetched via oEmbed, saved as `.md` with tweet text and author
+- arXiv → abstract + metadata saved as `.md`
+- PDF → downloaded as `.pdf`
+- Images (.png/.jpg/.webp) → downloaded; content extracted via Claude vision on the next run
+- Any webpage → converted to markdown via html2text
+
+---
+
+## For --watch
+
+Start a background watcher that monitors a folder and auto-updates the graph when files change.
+
+```bash
+python3 -m graphify.watch INPUT_PATH --debounce 3
+```
+
+Replace `INPUT_PATH` with the folder to watch. Behavior depends on what changed:
+
+- **Code files only (.py, .ts, .go, etc.):** re-runs AST extraction + rebuild + cluster immediately, no LLM needed. `graph.json` and `GRAPH_REPORT.md` are updated automatically.
+- **Docs, papers, or images:** writes a `graphify-out/needs_update` flag and prints a notification to run `/graphify --update` (LLM semantic re-extraction required).
+
+Debounce (default 3s): the watcher waits until file activity has stopped before acting, so a wave of parallel agent writes results in a single rebuild instead of one per file.
+
+Press Ctrl+C to stop.
+
+For agentic workflows: run `--watch` in a background terminal. Code changes from agent waves are picked up automatically between waves. If agents are also writing docs or notes, you'll need a manual `/graphify --update` after those waves.
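+
+One way to do this from a POSIX shell (the log and pid file names are just suggestions) is to launch the watcher with `nohup` so it keeps running after the terminal closes:
+
+```bash
+nohup python3 -m graphify.watch . --debounce 3 > graphify-watch.log 2>&1 &
+echo $! > graphify-watch.pid    # later: kill "$(cat graphify-watch.pid)" to stop the watcher
+```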
+
+---
+
+## For git commit hook
+
+Install a post-commit hook that auto-rebuilds the graph after every commit. No background process needed - triggers once per commit, works with any editor.
+
+```bash
+graphify hook install    # install
+graphify hook uninstall  # remove
+graphify hook status     # check
+```
+
+After every `git commit`, the hook detects which code files changed (via `git diff HEAD~1`), re-runs AST extraction on those files, and rebuilds `graph.json` and `GRAPH_REPORT.md`. Doc/image changes are ignored by the hook - run `/graphify --update` manually for those.
+
+If a post-commit hook already exists, graphify appends to it rather than replacing it.
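+
+For context, the change-detection step the hook performs can be sketched roughly as follows. This is an illustration only, not the script `graphify hook install` actually writes, and the rebuild call itself is elided:
+
+```bash
+#!/bin/sh
+# Conceptual sketch of the post-commit detection step (illustrative only).
+# List code files touched by the commit that just landed.
+changed=$(git diff --name-only HEAD~1 HEAD -- '*.py' '*.ts' '*.go')
+if [ -n "$changed" ]; then
+  # The real hook re-runs AST extraction on these files and rebuilds
+  # graph.json and GRAPH_REPORT.md; doc and image changes are skipped.
+  echo "graphify: code changed in commit, rebuilding graph"
+fi
+```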
+
+---
+
+## For native CLAUDE.md integration
+
+Run once per project to make graphify always-on in Claude Code sessions:
+
+```bash
+graphify claude install
+```
+
+This writes a `## graphify` section to the local `CLAUDE.md` that instructs Claude to check the graph before answering codebase questions and rebuild it after code changes. No manual `/graphify` needed in future sessions.
+
+```bash
+graphify claude uninstall  # remove the section
+```
+
+---
+
+## Honesty Rules
+
+- Never invent an edge. If unsure, use AMBIGUOUS.
+- Never skip the corpus check warning.
+- Always show token cost in the report.
+- Never hide cohesion scores behind symbols - show the raw number.
+- Never run HTML viz on a graph with more than 5,000 nodes without warning the user.

+ 1 - 0
skills/gstack-upgrade/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/gstack-upgrade/SKILL.md

+ 1 - 0
skills/guard/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/guard/SKILL.md

+ 0 - 23
skills/health/SKILL.md

@@ -1,23 +0,0 @@
----
-name: health
-description: Run setup diagnostic — check symlinks, plugins, permissions, token budget
-argument-hint: (no arguments needed)
-disable-model-invocation: true
-allowed-tools: Bash
----
-
-Run the health check script:
-
-```bash
-bash $HOME/.claude/doctor.sh 2>/dev/null || {
-  REPO=$(dirname "$(readlink "$HOME/.claude/CLAUDE.md" 2>/dev/null)" 2>/dev/null)
-  bash "$REPO/doctor.sh"
-}
-```
-
-After displaying the doctor.sh output:
-- **CRITICAL token (>30%)** → suggest `/plugin-check` to disable unused plugins; list the heaviest ones.
-- **WARNING token (>15%)** → note which toggle plugins are active and not needed.
-- **Errors (symlinks, agents)** → show the exact fix command (`bash link.sh`).
-- **Warnings only** → confirm setup is functional, note any action recommended.
-- **All pass** → confirm healthy and operational.

+ 1 - 0
skills/health/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/health/SKILL.md

+ 1 - 0
skills/investigate/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/investigate/SKILL.md

+ 1 - 0
skills/land-and-deploy/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/land-and-deploy/SKILL.md

+ 1 - 0
skills/learn/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/learn/SKILL.md

+ 1 - 0
skills/office-hours/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/office-hours/SKILL.md

+ 1 - 0
skills/open-gstack-browser/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/open-gstack-browser/SKILL.md

+ 1 - 0
skills/pair-agent/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/pair-agent/SKILL.md

+ 1 - 0
skills/plan-ceo-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/plan-ceo-review/SKILL.md

+ 1 - 0
skills/plan-design-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/plan-design-review/SKILL.md

+ 1 - 0
skills/plan-devex-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/plan-devex-review/SKILL.md

+ 1 - 0
skills/plan-eng-review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/plan-eng-review/SKILL.md

+ 1 - 0
skills/qa-only/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/qa-only/SKILL.md

+ 1 - 0
skills/qa/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/qa/SKILL.md

+ 1 - 0
skills/retro/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/retro/SKILL.md

+ 1 - 0
skills/review/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/review/SKILL.md

+ 1 - 0
skills/setup-browser-cookies/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/setup-browser-cookies/SKILL.md

+ 1 - 0
skills/setup-deploy/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/setup-deploy/SKILL.md

+ 1 - 0
skills/ship/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/ship/SKILL.md

+ 1 - 0
skills/unfreeze/SKILL.md

@@ -0,0 +1 @@
+/home/bchanot-ubuntu/Documents/claude/skills-external/gstack/unfreeze/SKILL.md