
How We Made Our Trust Center Agent-Native (and Why It Matters More Than You Think)
AI agents are starting to run vendor due diligence. The question is whether your compliance infrastructure is ready to talk to them.
Coding agents are rewriting how software gets built. But there's a quieter shift happening on the buyer side of B2B sales: AI agents are starting to run vendor due diligence. The question is whether your compliance infrastructure is ready to talk to them.
We weren't. Then we made it so.
The due diligence bottleneck
Every enterprise sale includes a security review. A prospect sends a questionnaire — sometimes 200 questions, sometimes 800 — and your team scrambles to answer it from a patchwork of statements of applicability, penetration test summaries, and policy documents scattered across SharePoint.
And the pressure is only increasing. EU regulations are turning vendor assurance from a nice-to-have into a legal obligation. NIS2 Article 21(2)(d) mandates continuous supply chain security — not annual vendor reviews, but ongoing proof that your suppliers meet your risk thresholds. DORA imposes structured ICT third-party risk management and incident reporting obligations that cascade through provider relationships. The Cyber Resilience Act extends this to products with digital elements, requiring vulnerability handling and security-by-design evidence throughout the supply chain. When every vendor relationship now carries regulatory weight, the manual questionnaire workflow doesn't work anymore.
The average response cycle takes a week, and often stretches to two. That's weeks where a deal sits in limbo, where your champion loses momentum, and where a competitor with a faster compliance workflow eats your pipeline.
Trust Centers were supposed to fix this. Publish your security posture publicly, let prospects self-serve. You can set one up in 30 minutes. And they did help, especially for sales teams trying to keep deals moving, as long as a human was browsing the page. But the next wave of buyers won't browse. They'll send an agent.
"The AI can't know your NDA-gated secrets"
We first noticed the problem when a prospect's procurement team told us their internal AI tool couldn't parse our Trust Center. They had built an agent that crawled vendor security pages, extracted evidence, and pre-filled their questionnaire. Against our Trust Center, it returned nothing useful.
The agent could see the page. It could read the HTML; we even had an llms.txt. But it had no way to map our documents to the prospect's own control framework, no way to tell how quickly vulnerabilities were remediated, and no way to authenticate for restricted content to dig deeper.
Our Trust Center wasn't broken, but it wasn't really useful either. It was the same core problem we ran into when we built a compliance-first AI feature that enforces data boundaries by design: retrieval is only useful if access control and evidence boundaries survive the handoff to AI.
The llms.txt standard
The emerging llms.txt convention gives AI agents a machine-readable entry point into any website. Think of it as robots.txt for capability discovery: instead of telling crawlers what to avoid, it tells agents what's available and how to use it.
We adopted this standard, but a flat text file listing pages wasn't going to cut it for compliance content. A Trust Center isn't a blog. It has access tiers, versioned documents, NDA gates, and evidence chains that need to be cited with precision. So we extended the pattern.
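As a concrete sketch, a minimal Trust Center llms.txt might look like this. The paths match the routes described below, but the wording and layout here are illustrative, not our exact file:

```text
# Trust Center — Agent Guide
You are answering due diligence questionnaires using ONLY the evidence here.

## Endpoints
- /llms.json       machine quickstart: schema version, auth contract, ETag caching
- /llms-full.txt   comprehensive human-readable export
- /llms-full.json  complete catalog as structured data

## Answer format
Cite doc_id, version, and modal_url for every assertion.
If no evidence exists in the catalog, answer "insufficient".
```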
Four endpoints, four purposes
We ship four agent-facing routes on every Trust Center deployment:
/llms.txt — The human-and-agent guide. Plain text, readable by both GPTs and the engineer debugging an integration. It declares the Trust Center's scope ("you are answering due diligence questionnaires using ONLY the evidence here"), lists every API endpoint, and specifies the exact answer format agents should produce, including how to cite document versions.
/llms.json — The machine quickstart. A structured JSON payload with schema versioning, ETag caching for conditional requests, and a complete auth contract describing the OAuth Device Flow. An agent can parse this in one request and know exactly how to authenticate, what scopes to request, and how to handle non-200 responses.
/llms-full.txt — The comprehensive export. Every public document, FAQ, knowledge base entry, and subprocessor with full metadata, plus metadata-only stubs for gated content. Human-readable sections up top, machine-parseable YAML at the bottom.
/llms-full.json — The complete catalog as structured data. Every record sorted, typed, and ready for programmatic consumption. This is what an agent uses to do its actual work.
Each endpoint serves a different phase of the agent lifecycle: discovery, authentication, evidence gathering, and structured response generation.
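To make the discovery phase concrete, here is a minimal sketch of what an agent does with /llms.json once parsed. The field names in the sample payload are illustrative assumptions, not our actual schema:

```python
# Hypothetical /llms.json payload; field names are illustrative, not the shipped schema.
SAMPLE_MANIFEST = {
    "schema_version": "1.0",
    "auth": {
        "flow": "oauth_device",
        "device_authorization_endpoint": "https://trust.example.com/oauth/device",
        "token_endpoint": "https://trust.example.com/oauth/token",
        "scopes": ["catalog:read", "documents:restricted"],
    },
    "endpoints": ["/llms.txt", "/llms.json", "/llms-full.txt", "/llms-full.json"],
}

def plan_auth(manifest: dict) -> dict:
    """Extract everything an agent needs before its first authenticated request."""
    auth = manifest["auth"]
    if auth["flow"] != "oauth_device":
        raise ValueError(f"unsupported auth flow: {auth['flow']}")
    return {
        "device_endpoint": auth["device_authorization_endpoint"],
        "token_endpoint": auth["token_endpoint"],
        "scopes": auth["scopes"],
    }
```

In practice the agent would send an If-None-Match header with the cached ETag on refetch, so an unchanged catalog costs a single 304 response.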
The authentication problem
Static content is the easy part. The hard part is authentication.
Most compliance documents shouldn't be publicly accessible. Statements of applicability, penetration test results, vendor risk assessments — these live behind access requests, magic links, or even NDAs. An agent needs to navigate these gates the same way a human would, but programmatically.
We implemented OAuth Device Flow for agent authentication. The pattern works like this: the agent requests a device code, displays a verification URL to the human operator, the human approves in their browser, and the agent polls for an access token. Once authenticated, the agent operates with scoped bearer tokens that can be introspected and revoked.
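The polling loop above can be sketched as follows. The HTTP layer is injected as a callable so the sketch stays transport-agnostic; endpoint URLs and the client ID are assumptions, and the grant-type string is the one defined by RFC 8628:

```python
import time

def device_flow(post, device_url, token_url, client_id, scopes):
    """OAuth Device Flow sketch. `post` is any callable(url, data) -> dict,
    so the HTTP client is pluggable."""
    # Step 1: request a device code and show the human where to approve.
    grant = post(device_url, {"client_id": client_id, "scope": " ".join(scopes)})
    print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")
    interval = grant.get("interval", 5)
    # Step 2: poll the token endpoint until the human approves in the browser.
    while True:
        tok = post(token_url, {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": grant["device_code"],
            "client_id": client_id,
        })
        if "access_token" in tok:
            return tok["access_token"]      # scoped bearer token, revocable
        if tok.get("error") == "authorization_pending":
            time.sleep(interval)            # keep polling at the server's pace
        elif tok.get("error") == "slow_down":
            interval += 5                   # server asked us to back off
            time.sleep(interval)
        else:
            raise RuntimeError(tok.get("error", "device flow failed"))
```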
This is the same fundamental insight that Keycard articulates for coding agents: the identity model needs to decompose into who initiated the action (the human), what's performing it (the agent), where it's running (the device), and what it's doing (the task). Our Trust Center auth contract captures the same dimensions: the browser session establishes human identity, the device flow binds the agent, the scopes define the task boundary, and the token ties it all to a specific deployment and entitlements.
NDA workflows for machines
Here's where it gets interesting. Some documents require NDA acknowledgment before access. For a human, that's a modal dialog and a signature. For an agent, it's a state machine.
Our auth_contract in llms.json includes a full NDA handoff specification:
- Agent detects a document requires NDA (access tier in the catalog)
- Agent fetches the NDA template via the API and submits it with the signer's name
- After completion, agent restarts the OAuth flow to mint a fresh token with NDA-cleared scopes
The agent treats browser-return signals as hints, not confirmations. It always verifies through polling. This is the kind of defensive design that matters when agents operate autonomously.
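The state machine above can be sketched like this. The `api` client and its method names are hypothetical stand-ins for the Trust Center API, not a published SDK:

```python
def needs_nda(doc: dict) -> bool:
    """Catalog entries carry an access tier; 'nda' means acknowledgment comes first."""
    return doc.get("access_tier") == "nda"

def clear_nda(api, doc_id: str, signer: str) -> str:
    """Walk the NDA state machine, then re-mint a token with NDA-cleared scopes.
    `api` is a hypothetical client; method names are illustrative."""
    template = api.get_nda_template(doc_id)
    submission = api.submit_nda(doc_id, template_id=template["id"], signer=signer)
    # Browser-return signals are hints, not confirmations: always verify by polling.
    while api.nda_status(submission["id"]) != "completed":
        api.wait()
    # Restart the OAuth flow so the fresh token carries NDA-cleared scopes.
    return api.reauthenticate(scopes=["documents:nda"])
```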
Evidence-backed answers only
We enforce a strict output contract. Every answer an agent produces must follow this structure:
{
  "answer": "yes|no|partial|insufficient",
  "evidence": [
    {
      "doc_id": "UUID",
      "title": "SOC 2 Type II Report",
      "modal_url": "https://trust.example.com/?tab=documentation&doc=soc2&v=2025-Q1",
      "version": "2025-Q1",
      "access_tier": "restricted",
      "last_updated": "2025-01-15"
    }
  ]
}
Every assertion maps to a specific document version with a reproducible URL. If the evidence doesn't exist in the Trust Center, the answer is insufficient and the agent tells the reviewer which documents to request access to.
This is the fundamental difference between an agent that sounds knowledgeable and one that is auditable.
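A contract like this is only as strong as its enforcement. Here is a minimal validator sketch for the structure above; a production check would also verify each doc_id against the catalog, which this sketch omits:

```python
ANSWERS = {"yes", "no", "partial", "insufficient"}
EVIDENCE_FIELDS = {"doc_id", "title", "modal_url", "version", "access_tier", "last_updated"}

def validate_answer(payload: dict) -> list:
    """Return a list of contract violations; an empty list means the answer is admissible."""
    errors = []
    if payload.get("answer") not in ANSWERS:
        errors.append(f"answer must be one of {sorted(ANSWERS)}")
    evidence = payload.get("evidence", [])
    # A substantive answer with no evidence is exactly the hallucination risk
    # the contract exists to prevent.
    if payload.get("answer") in {"yes", "no", "partial"} and not evidence:
        errors.append("substantive answers require at least one evidence entry")
    for i, ev in enumerate(evidence):
        missing = EVIDENCE_FIELDS - ev.keys()
        if missing:
            errors.append(f"evidence[{i}] missing fields: {sorted(missing)}")
    return errors
```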
What this means for sales cycles
When a prospect's AI agent can autonomously:
- Discover your Trust Center's capabilities via /llms.json
- Authenticate through OAuth Device Flow
- Navigate NDA requirements with human-in-the-loop signing
- Extract every compliance document with version-pinned citations
- Produce structured, evidence-backed questionnaire responses
...you've compressed a week of review into an afternoon. The agent does the extraction and citation work. The human reviewer validates the pre-filled answers. The deal moves.
One honest note: we're early. The llms.txt standard is still emerging, and most procurement teams don't have agents running vendor reviews yet. But the trajectory is clear. The companies that build agent-native compliance infrastructure now will have a structural advantage when — not if — AI-assisted procurement becomes the default.
What to take from this
- llms.txt is the new robots.txt. If your product serves structured content that agents will consume, ship machine-readable discovery endpoints. Don't wait for a standards body to finalize the spec.
- Authentication is the hard part. OAuth Device Flow solves the human-in-the-loop problem cleanly. Your auth contract should be as machine-readable as your content catalog.
- Enforce evidence chains. Agents that cite sources are useful. Agents that hallucinate compliance answers are dangerous. Build the output contract into the protocol, not the prompt.
- Design for the access tier spectrum. Public, restricted, NDA-gated — your agent endpoints need to handle the full range, including the state machines for NDA completion.
The shift from human-browsed Trust Centers to agent-consumed compliance APIs is an infrastructure transition. We built ours. Claim yours now.