Honest assessment: llms.txt is a proposed standard by Jeremy Howard
(Answer.AI, Sept 2024). No major AI crawler has publicly confirmed they
extract content via /llms.txt. A Search Engine Land study (2025) found
8 of 9 sites saw no measurable traffic change after adoption.
Why include it anyway: the cost is near zero when the files are generated at build time, and early adoption is a hedge in case crawlers start consuming them.
Do not promise ranking gains. Frame it as a "no-regret hedge", not a "quick win".
- `/llms.txt` — root of domain. An index of your content in markdown.
- `/llms-full.txt` — root of domain. The full text of your most important pages, concatenated. Optional but recommended for docs/blogs/knowledge bases.

Both MUST be reachable over HTTPS, served with content-type `text/plain` or `text/markdown`, and NOT blocked in robots.txt.
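Those requirements can be spot-checked with a small script. A sketch in TypeScript (Node 18+ for the global `fetch`); the function names and the origin argument are hypothetical, not part of any spec:

```typescript
const ACCEPTED = ['text/plain', 'text/markdown'];

// True when a Content-Type header matches an accepted type,
// ignoring parameters such as "; charset=utf-8".
function isAcceptedContentType(header: string | null): boolean {
  if (!header) return false;
  const mime = header.split(';')[0].trim().toLowerCase();
  return ACCEPTED.includes(mime);
}

// Fetch the file over HTTPS and collect any problems found.
async function checkLlmsTxt(origin: string): Promise<string[]> {
  const problems: string[] = [];
  const res = await fetch(`${origin}/llms.txt`);
  if (!res.ok) problems.push(`HTTP ${res.status}`);
  const ct = res.headers.get('content-type');
  if (!isAcceptedContentType(ct)) problems.push(`unexpected content-type: ${ct}`);
  return problems;
}
```

Run it as `checkLlmsTxt('https://example.com')`; an empty array means both checks passed.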
Template for `/llms.txt`:

```markdown
# <Site or Project Name>

> <One-sentence elevator pitch. This is the single line AI systems extract
> as your site summary. Be concrete. Include entity + category + differentiator.>

<Optional free-form paragraph providing more context. Keep under 400 chars.>

## Docs

- [Getting started](https://example.com/docs/getting-started): What it does, how to install.
- [API reference](https://example.com/docs/api): All endpoints with examples.
- [Tutorials](https://example.com/docs/tutorials): Step-by-step walkthroughs.

## Examples

- [Quickstart example](https://example.com/examples/quickstart.md): Minimal working demo.

## Optional

- [Changelog](https://example.com/changelog.md): Version history.
- [Blog](https://example.com/blog/index.md): In-depth articles.
```
Format rules:
- `# <Name>` — H1 with project/site name.
- `> summary` — blockquote, one sentence.
- `## Docs`, `## Examples`, `## Optional`, etc. — section headings.
- `[Title](URL): description` — keep each description under 120 chars; link to the `.md` version of the page where one exists.

llms-full.txt: a concatenation of the full text (stripped of nav/footer/ads) of your most important pages. Separator between pages:
```markdown
---
URL: https://example.com/docs/getting-started
Title: Getting Started
---
<full markdown content of that page>
---
URL: https://example.com/docs/api
Title: API Reference
---
<full markdown content of that page>
```
Target under 500 KB. If your corpus is larger, trim to highest-value pages (most-linked, most-traffic, most-updated).
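The separator format and the size cap can be combined into one assembly step. A sketch, assuming pages arrive pre-sorted by value (most-linked / most-traffic first) and already stripped of nav/footer/ads; the `Page` shape and function names are assumptions of this sketch, not part of any spec:

```typescript
interface Page {
  url: string;
  title: string;
  markdown: string; // full page content, chrome already stripped
}

const MAX_BYTES = 500 * 1024; // the guide's ~500 KB target

const byteLength = (s: string): number => new TextEncoder().encode(s).length;

// Render one page in the "---" separator format shown above.
function renderPage(p: Page): string {
  return `---\nURL: ${p.url}\nTitle: ${p.title}\n---\n${p.markdown}\n`;
}

// Concatenate pages in order, stopping once the size budget is spent —
// since input is sorted by value, low-value pages are what gets trimmed.
function buildLlmsFull(pages: Page[]): string {
  let out = '';
  for (const p of pages) {
    const block = renderPage(p);
    if (byteLength(out + block) > MAX_BYTES) break;
    out += block;
  }
  return out;
}
```

Stopping at the first page that overflows (rather than truncating mid-page) keeps every included page intact.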
Best practice: generate both files at build time from the same source as your regular pages. Examples:
Astro: add a src/pages/llms.txt.ts endpoint:
```typescript
import { getCollection } from 'astro:content';

export async function GET() {
  const docs = await getCollection('docs');
  const body = [
    '# My Project',
    '',
    '> One-sentence pitch.',
    '',
    '## Docs',
    ...docs.map(d => `- [${d.data.title}](https://example.com/docs/${d.slug}): ${d.data.description}`),
  ].join('\n');
  return new Response(body, { headers: { 'Content-Type': 'text/plain' } });
}
```
Next.js App Router: `app/llms.txt/route.ts`:

```typescript
export async function GET() {
  // Same idea — build the body from your CMS/MDX/db instead of a literal.
  const body = '# My Project\n\n> One-sentence pitch.\n';
  return new Response(body, { headers: { 'Content-Type': 'text/plain' } });
}
```
Hugo: custom output format llms → llms.txt template in layouts.
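A minimal sketch of that Hugo wiring, assuming a `hugo.toml` config; the key names follow Hugo's custom output format mechanism, adapt to your own layouts:

```toml
# Define an "llms" output format rendered as plain text → /llms.txt
[outputFormats.llms]
  baseName = "llms"
  mediaType = "text/plain"
  isPlainText = true

# Emit it for the home page alongside the usual outputs
[outputs]
  home = ["html", "rss", "llms"]
```

The home page then renders through a template such as `layouts/index.llms.txt`, which loops over the site's pages to emit the link list.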
If you can't hook the build, use a plugin OR a cron job that regenerates the files weekly, and flag stale files (older than the site content) in audits.
For a hand-maintained file, flag it in audits if it is older than 90 days.
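That audit rule reduces to a one-line check. A sketch; the function name is an assumption, and the 90-day default follows the guidance above:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// True when the file's last-modified time exceeds the allowed age.
function isStale(fileMtime: Date, now: Date, maxAgeDays = 90): boolean {
  return now.getTime() - fileMtime.getTime() > maxAgeDays * DAY_MS;
}
```

In a CI step you might feed it `fs.statSync('public/llms.txt').mtime` for a hand-maintained file, or compare against the newest content file's mtime for a generated one.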
Tooling:
- llms-txt-action (GitHub Action) — regenerates the files on each deploy.
- llmstxt-hub — community directory of examples.

Audit checklist:
- `/llms.txt` reachable over HTTPS
- Content-type `text/plain` or `text/markdown`
- Listed in `/sitemap.xml`? Optional, debated
- Not blocked by `/robots.txt`