Agent Score evaluates how well your docs serve AI coding agents like Cursor, Claude, and ChatGPT.
Developed by Fern Labs in partnership with Dachary Carey
Why it matters
AI coding agents are reading your API docs millions of times a day. If your docs aren't optimized for these agents, you're invisible to the fastest-growing segment of your user base.
Agent Score is the first industry benchmark for AI-agent readiness. Think Lighthouse, but for how effectively AI agents can discover, parse, and use your documentation: a single score from 0 to 100, built from 21 checks across 8 categories.
API platforms with higher agent readability are already seeing outsized adoption, yet 73% of top API docs still lack an llms.txt. Agent readiness isn't a nice-to-have. It's the new SEO.
Built on the open-source Agent-Friendly Docs Spec with no gatekeeping, no black boxes. Every check is transparent and community-driven.
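If llms.txt is new to you: the convention (see llmstxt.org) is a small markdown index served at your docs root, with an H1 title, a blockquote summary, and sections of annotated links. A minimal example follows; the product name and URLs are illustrative placeholders, not a real site:

```markdown
# Acme API

> Acme is a payments API. This index lists the docs pages agents should read first.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Make your first request in five minutes
- [API reference](https://docs.example.com/api.md): Endpoints, parameters, and errors

## Optional

- [Changelog](https://docs.example.com/changelog.md): Release history
```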
How the top API documentation sites score on agent-readiness
A comprehensive framework for evaluating agent-readiness
Can agents discover and parse your documentation index? (5 checks)
Can agents get clean markdown instead of bloated HTML? (2 checks)
Will your pages fit in an agent's context window? (3 checks)
Are tabs, headers, and code fences agent-parseable? (3 checks)
Do your URLs resolve cleanly without traps? (2 checks)
Can agents find your llms.txt from any page? (1 check)
Is your agent-facing content fresh and accurate? (3 checks)
Can agents access your docs without hitting auth walls? (2 checks)
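To make a few of these concrete, here is a rough sketch of what probes like these can look like in code. This is not the Agent Score implementation: the base URL is a placeholder, the `.md`-suffix convention is an assumption about how a site serves markdown, and the ~4-characters-per-token budget is a coarse heuristic.

```typescript
// Minimal agent-readiness probes: an illustrative sketch, not Agent Score itself.

async function checkLlmsTxt(base: string): Promise<boolean> {
  // Discovery: does /llms.txt exist and look like a markdown index?
  const res = await fetch(new URL("/llms.txt", base));
  if (!res.ok) return false;
  const body = await res.text();
  // The llms.txt convention opens with an H1 title.
  return body.trimStart().startsWith("# ");
}

async function checkMarkdownVariant(pageUrl: string): Promise<boolean> {
  // Clean markdown: some agent-friendly sites serve page.md alongside the HTML page.
  const res = await fetch(pageUrl.replace(/\/?$/, ".md"));
  if (!res.ok) return false;
  const contentType = res.headers.get("content-type") ?? "";
  return contentType.includes("text/markdown") || contentType.includes("text/plain");
}

async function checkPageFits(pageUrl: string, maxTokens = 32_000): Promise<boolean> {
  // Context window: estimate token count at roughly 4 characters per token.
  const res = await fetch(pageUrl);
  if (!res.ok) return false;
  const text = await res.text();
  return text.length / 4 <= maxTokens;
}

async function main() {
  const base = "https://docs.example.com"; // placeholder docs site
  console.log("llms.txt discoverable:", await checkLlmsTxt(base));
  console.log("markdown variant served:", await checkMarkdownVariant(`${base}/quickstart`));
  console.log("page fits context window:", await checkPageFits(`${base}/quickstart`));
}

main().catch(console.error);
```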
Fern-powered docs are agent-ready by default.