LLMs.txt Research Paper (2026): Evidence Over Hype
This paper evaluates whether llms.txt actually improves AI discovery, crawling behavior, or visibility in LLM-generated answers. Rather than speculate, we tested it across multiple live websites over several months, tracking server logs, analytics, and downstream outcomes.
The full research paper is available as a PDF.
What you’ll learn: what llms.txt is supposed to do, how it compares to existing standards (like robots.txt, sitemaps, and structured data), what major search engines have cautioned about “LLM-only” content formats, and what our measurements show when llms.txt is implemented on production sites.
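For context, the llms.txt proposal describes a markdown file served at a site's root that lists the pages an LLM should read first. A minimal illustrative sketch, following the proposed structure (the site name, URLs, and descriptions below are hypothetical):

```markdown
# Example Co

> Example Co makes widgets. This file points AI systems to our most useful pages.

## Docs

- [Getting started](https://example.com/docs/start): Setup and first steps
- [API reference](https://example.com/docs/api): Endpoints and authentication

## Policies

- [Terms of service](https://example.com/terms): Usage terms
```

The format is deliberately simple: an H1 title, a blockquote summary, and H2 sections containing markdown link lists.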
Why this matters: AI search and answer engines are changing how people find information, and marketers are under pressure to “optimize for LLMs.” This research helps separate useful technical work from busywork, so you can prioritize changes that actually move the needle.
Recommended next steps: treat llms.txt as experimental; focus first on user-facing content quality, crawlability, internal linking, and structured data; and use log-based measurement to validate any AI-visibility claims before rolling llms.txt out across your whole site.
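Log-based measurement can start very small: grep your access logs for fetches of /llms.txt and attribute them to known AI crawler user agents. A minimal sketch, assuming combined-format access logs; the sample log lines and the list of user-agent substrings are illustrative and should be adapted to what actually appears in your logs:

```python
import re
from collections import Counter

# Illustrative sample of combined-log-format lines (hypothetical traffic).
SAMPLE_LOG = """\
203.0.113.5 - - [01/Mar/2026:10:00:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"
203.0.113.9 - - [01/Mar/2026:10:05:00 +0000] "GET /about HTTP/1.1" 200 2048 "-" "Mozilla/5.0"
203.0.113.7 - - [01/Mar/2026:11:00:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; PerplexityBot/1.0)"
"""

# User-agent substrings of known AI crawlers; extend for your own logs.
AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

# Pull the request path and user-agent string out of each log line.
LINE_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def count_llms_txt_hits(log_text: str) -> Counter:
    """Count /llms.txt fetches per recognized AI crawler."""
    hits = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if not m or m.group("path") != "/llms.txt":
            continue
        for agent in AI_AGENTS:
            if agent in m.group("ua"):
                hits[agent] += 1
    return hits

print(count_llms_txt_hits(SAMPLE_LOG))
```

Running the same count before and after publishing llms.txt, and comparing it against crawl rates for your HTML pages, is the kind of baseline-versus-treatment check this paper relies on.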
About the researchers:
Chad Castilla (marketing implementation, metrics tracking, analysis, and authorship)
Napoleon Griffin (server-side implementation and technical execution)
Elizabeth Shemesh (final edits and publication review)