New standard for AI agent discovery

Make your website readable
by every AI agent

Generate a spec-compliant llms.txt file in seconds. Tell ChatGPT, Claude, Perplexity, and every AI crawler what your site contains — in clean, structured Markdown.

No login required · Instant download · Spec-compliant output · Token count included
Validates against the spec
llms.txt + llms-full.txt
Token count estimator
7 platform deploy guides
Free Generator

Build your llms.txt file

Fill in the fields below. Output maps directly to the open standard. Validation runs on generate.

Appears as: > Your description here — aim for under 160 characters.
Essential pages — AI agents prioritise these
Optional pages — secondary context
How It Works

Three steps to AI discoverability

1

Generate your file

Fill in the form above with your site name, description, and key pages. We produce a properly formatted file in seconds.

2

Deploy to your root

Place the file at yoursite.com/llms.txt. See our platform guides for Vercel, Next.js, Nginx, and more.

3

AI agents find you

Crawlers like GPTBot and ClaudeBot discover your file and build a more accurate understanding of your site.
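Once deployed, you can confirm crawlers are actually fetching the file by grepping your access log. A minimal sketch, assuming an Nginx/Apache-style combined log format; the log lines, path, and user-agent strings here are illustrative:

```shell
# Sample access-log lines for illustration (replace with your real log file)
cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2025:00:00:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "GPTBot/1.0"
5.6.7.8 - - [01/Jan/2025:00:01:00 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
9.8.7.6 - - [01/Jan/2025:00:02:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "ClaudeBot/1.0"
EOF

# Count case-insensitive hits from known AI crawlers on /llms.txt
grep -iE 'gptbot|claudebot' access.log | grep -c 'llms.txt'   # prints 2
```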

The Standard

What is the llms.txt format?

llms.txt is a Markdown file placed at your domain root. It was proposed in 2024 as an open community standard — documented at llmstxt.org — to help AI language models understand site content without parsing HTML.

Required Format
# Required: H1 with site name
# Your Site Name

> Required: one-line blockquote
> Brief description of your site.

## Optional H2 section headers
## Essential Content
- [Page Title](https://site.com/page): Description
- [About](https://site.com/about): Who we are

## Optional
- [Blog](https://site.com/blog): Articles
Validation Rules

| Element | Rule | Status |
|---|---|---|
| # H1 | First line, site name | Required |
| > blockquote | Follows H1, one sentence | Required |
| ## H2 | Section groupings | Optional |
| [text](url) | Markdown links | Optional |
| File path | /llms.txt at root | Required |
| Content-Type | text/plain | Required |

llms.txt vs robots.txt: robots.txt restricts what crawlers can access. llms.txt invites AI agents to understand your content. The two are complementary — you should have both. You can reference your llms.txt from robots.txt with a comment: # LLM-Content: /llms.txt
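Taken together, a minimal robots.txt that allows crawling and points agents at your index might look like this. Note the LLM-Content line is only a comment convention suggested above, not a directive in the robots.txt standard:

```text
# robots.txt
User-agent: *
Allow: /

# LLM-Content: /llms.txt
Sitemap: https://yoursite.com/sitemap.xml
```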

Platform Guides

Deploy in under 60 seconds

Copy-paste snippets for every major platform.

Place llms.txt in your /public folder. Vercel serves public files from the root automatically.

# Terminal
cp llms.txt ./public/llms.txt
git add . && git commit -m "feat: add llms.txt for AI discovery"
git push
# Verify: https://yoursite.com/llms.txt

Next.js: use a Route Handler in the App Router to serve the file dynamically, which is useful if you generate it from a CMS.

// app/llms.txt/route.ts
export async function GET() {
  const content = `# Your Site
> One sentence about your site.

## Essential Content
- [Home](https://yoursite.com): Main page
- [Docs](https://yoursite.com/docs): Documentation
`;
  return new Response(content, {
    headers: {
      'Content-Type': 'text/plain; charset=UTF-8',
      'Cache-Control': 'public, max-age=86400'
    }
  });
}
# nginx.conf
server {
    listen 443 ssl;
    server_name example.com;
    location = /llms.txt {
        root /var/www/html;
        # .txt already maps to text/plain in mime.types; no add_header override needed
        add_header Cache-Control "public, max-age=86400";
    }
}
# netlify.toml
[[headers]]
  for = "/llms.txt"
  [headers.values]
    Content-Type = "text/plain; charset=UTF-8"
    Cache-Control = "public, max-age=86400"

Add to functions.php. Stores content in a WordPress option you can edit from the dashboard.

// functions.php
// Visit Settings → Permalinks once after adding this so WordPress
// flushes its rewrite rules and the new route takes effect.
add_action('init', function() {
    add_rewrite_rule('^llms\\.txt$', 'index.php?llms_txt=1', 'top');
});
add_filter('query_vars', fn($v) => [...$v, 'llms_txt']);
add_action('template_redirect', function() {
    if (get_query_var('llms_txt')) {
        header('Content-Type: text/plain; charset=UTF-8');
        echo get_option('llms_txt_content', '# My Site');
        exit;
    }
});

Shopify blocks arbitrary root files. Use a Cloudflare Worker to intercept and return your content.

// Cloudflare Worker
const LLMS = `# My Store
> Brief description of your store.

## Essential Content
- [Home](https://mystore.com): Main page
`;
export default {
  async fetch(req) {
    if (new URL(req.url).pathname === '/llms.txt')
      return new Response(LLMS, { headers: { 'Content-Type': 'text/plain' }});
    return fetch(req);
  }
};
# Place llms.txt in the repo root.
# GitHub Pages serves root files automatically.
git add llms.txt
git commit -m "add: llms.txt for AI agent discovery"
git push origin main
# Verify: https://username.github.io/llms.txt
# (project sites: https://username.github.io/repo-name/llms.txt)
Live Checker

Does any site have llms.txt?

Paste a URL to check whether a site has a valid llms.txt and see its compliance score.
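If you'd rather check by hand, the core required rules are easy to spot-check from a shell. A rough sketch; the sample file written here is illustrative:

```shell
# Create a sample llms.txt to check (illustrative content)
printf '# My Site\n\n> One sentence about my site.\n' > llms.txt

# Rule 1: the first line must be an H1 ("# " prefix)
head -n 1 llms.txt | grep -q '^# ' && echo "H1: ok"

# Rule 2: a one-line blockquote ("> " prefix) must follow
grep -q '^> ' llms.txt && echo "blockquote: ok"
```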

Known Adopters

Sites using llms.txt

Early adopters of the standard. The format is gaining traction across AI companies, developer tools, and SaaS products.

Anthropic
anthropic.com/llms.txt
Vercel
vercel.com/llms.txt
Stripe
stripe.com/llms.txt
Hugging Face
huggingface.co/llms.txt
fast.ai
fast.ai/llms.txt
Answer.AI
answer.ai/llms.txt
+ Add your site
Generate above →
FAQ

Common questions

Everything you need to know about the llms.txt standard and using this tool.

What is llms.txt?
llms.txt is an open, community-driven standard for helping AI language model agents understand website content. It was proposed in 2024 and documented at llmstxt.org. The format is a plain Markdown file placed at your domain root. This tool is an independent implementation — it is not affiliated with or endorsed by any specification author.

How is llms.txt different from robots.txt and sitemap.xml?
robots.txt controls access — which pages crawlers can visit. sitemap.xml lists URLs for search engine indexing. llms.txt provides comprehension — it tells AI agents what your site is about in clean Markdown. You should have all three; they serve different purposes and complement each other.

Will llms.txt improve how AI models cite my site?
Providing clean, structured Markdown at a canonical URL reduces ambiguity for AI crawlers. Rather than parsing your HTML layout, the agent gets an author-curated description. This reduces hallucination risk and improves the accuracy of summaries and citations. Citation frequency depends on many other factors, but accuracy should improve.

What is the difference between llms.txt and llms-full.txt?
llms.txt is the concise index — a link map with descriptions, ideally under 2,000 tokens. llms-full.txt is the expanded version with complete documentation, API refs, or FAQs for agents that need deep context. Most sites only need llms.txt. Developer tools and knowledge-heavy products benefit from both.

How often should I update it?
Update it when your site structure or core content changes. For content sites, monthly is reasonable. Serve it with Cache-Control: public, max-age=86400 so crawlers re-fetch daily. Treat it like your About page — a living document, not a one-time setup.

Is there a size limit?
There is no hard spec limit, but practical limits apply. llms.txt should stay under 2,000 tokens so it fits in any context window. llms-full.txt can be much larger for models with large windows. The token counter in our generator shows you exactly where you stand.
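For a sanity check outside the generator, the common heuristic of roughly four characters per token gives a quick estimate from the shell. This is a heuristic, not an exact tokenizer count:

```shell
# Rough token estimate for an llms.txt file: bytes / 4
printf '# My Site\n\n> One sentence about my site.\n' > llms.txt
wc -c < llms.txt | awk '{printf "~%d tokens\n", $1 / 4}'   # prints "~10 tokens"
```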