If an agent has to scrape your chrome to find the thing your system already knows, your interface is making the agent do extra work for no reason.
The public contract is the boring part that makes a site legible: llms.txt, sitemap.md, markdown twins, canonical APIs, truthful metadata, and one source projected everywhere.
Author: Joel Hooks (https://joelhooks.com)
Publisher: gremlin-cms
Published: 2026-03-08T00:00:00.000Z
Updated: 2026-03-08T00:00:00.000Z
Most AI-heavy sites still treat the machine-facing layer like an afterthought. The HTML page gets all the love. Everything else is a shrug. Then an agent shows up and has to scrape chrome to find the thing the system already knows.
The public contract is the boring layer that makes the rest of the site legible: llms.txt, sitemap.md, markdown twins, /api/content/:idOrSlug, /api/search, canonical links, and JSON-LD that actually matches the resource. If those surfaces drift, the product surface drifts.
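The JSON-LD is the surface that drifts most quietly, because nothing visible breaks when it lies. One way to keep it truthful is to derive it from the same canonical record the page renders. A minimal sketch in TypeScript; the `ContentRecord` shape and field names here are assumptions for illustration, not Gremlin's actual schema:

```typescript
// Hypothetical canonical record shape; Gremlin's real schema may differ.
interface ContentRecord {
  slug: string;
  title: string;
  publishedAt: string; // ISO 8601
  updatedAt: string;
  authorName: string;
  authorUrl: string;
}

// Project the record into JSON-LD. The point: the structured data is
// derived from the same source the page renders, so it cannot drift
// from the resource it describes.
function toJsonLd(record: ContentRecord, baseUrl: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: record.title,
    datePublished: record.publishedAt,
    dateModified: record.updatedAt,
    author: {
      "@type": "Person",
      name: record.authorName,
      url: record.authorUrl,
    },
    mainEntityOfPage: `${baseUrl}/${record.slug}`,
  };
}
```

The design choice is that there is no hand-written metadata anywhere: the HTML template and this projection both read from the record, so a title edit shows up in both or in neither.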
This is not separate from the website. It is the interface. A page is for people. A markdown twin is for cheap, direct reading. The content API is for structured retrieval. Search is for discovery. Same source, different projections. One source, or it turns into bullshit fast.
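The projection idea can be sketched directly. Assuming a hypothetical `Doc` record (not Gremlin's real shape), the markdown twin and the API document are just two pure functions over the same value; the HTML page and search index would be two more:

```typescript
// Hypothetical source record; names are illustrative.
interface Doc {
  slug: string;
  title: string;
  body: string; // markdown is the source of truth
}

// Markdown twin: the content with a title, none of the page chrome.
function toMarkdownTwin(doc: Doc): string {
  return `# ${doc.title}\n\n${doc.body}\n`;
}

// API projection: same fields, structured for retrieval by slug.
function toApiDocument(doc: Doc) {
  return { slug: doc.slug, title: doc.title, body: doc.body };
}
```

Because every projection is a function of the record, "keep the surfaces in sync" stops being a process problem and becomes a build step.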
The runtime validator is the rude friend in the room. A route file existing in the repo is not enough. The live page can still lie about Vary: Accept, fail content negotiation, or drift away from the markdown and API body. Static checks catch missing files. Runtime checks catch transport bugs.
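A runtime check along those lines might look like the sketch below, written as a pure function over artifacts already fetched from the live site. The `LivePage` shape and the drift rule (exact body match modulo surrounding whitespace) are assumptions for illustration, not Gremlin's validator:

```typescript
// Artifacts fetched from the live site, not the repo.
interface LivePage {
  headers: Record<string, string>; // lowercased header names
  markdownBody: string;            // body served for Accept: text/markdown
  apiBody: string;                 // body from /api/content/:idOrSlug
}

interface CheckResult {
  ok: boolean;
  failures: string[];
}

function checkTransport(page: LivePage): CheckResult {
  const failures: string[] = [];

  // The page must declare it varies on Accept, or shared caches will
  // happily serve one projection to a client that asked for another.
  const vary = (page.headers["vary"] ?? "")
    .toLowerCase()
    .split(",")
    .map((v) => v.trim());
  if (!vary.includes("accept")) {
    failures.push("missing Vary: Accept");
  }

  // The markdown twin and the API body must agree.
  if (page.markdownBody.trim() !== page.apiBody.trim()) {
    failures.push("markdown twin drifted from API body");
  }

  return { ok: failures.length === 0, failures };
}
```

The split matters: a static check can tell you the route file exists; only something shaped like this can tell you the deployed response is honest.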
That is the bet in Gremlin. Publish once. Keep the public contract boring. Make the machine path explicit. Then the HTML page, the markdown twin, the JSON document, and the search result all point at the same thing instead of four cousins telling slightly different stories.