
More Secure Vibes: Chainguard Academy’s AI Optimizations

Lisa Tagliaferri, Senior Director, Developer Enablement

Are you being enticed by the honeyed song of AI code assistants? Do you keep hearing them as Sirens’ melodious voices inviting you to otherworldly knowledge, but feel you must remain bound to your mast for fear that the code you ship will be riddled with security vulnerabilities? When an AI assistant suggests a container image or a config file, how can you be confident that you are being as secure as possible?


To ensure that you navigate to your home terminal safely, we made our entire documentation ecosystem more AI-friendly, so that assistants can give accurate, up-to-date, and security-focused responses grounded in our documentation.


Now you can vibe with greater assurance because we have optimized our documentation and tutorial hub, Chainguard Academy, so that you and your crew can sail alongside the AI Sirens in confidence. You can tell them to take the beeswax out of their ears.


From Myth to Model: Our AI-First Documentation Approach


You may have already noticed some of our AI optimizations, but others are under the radar. We’ll walk you through the improvements we made so that you can best leverage our resources when you work with a code assistant.


The "Copy Markdown for LLMs" Button: Direct AI Integration


One of our most practical features is the "Copy Markdown for LLMs" button that appears on every documentation page. This feature has already proven invaluable to our customers who regularly work with AI assistants. With a single click, you can:


  • Copy the entire page content in clean, structured Markdown format.

  • Paste it directly into your AI assistant of choice.

  • Get immediate, context-aware help based on the exact documentation you’re reviewing.


Whether you’re debugging a complex multi-stage build or trying to understand supply chain attestations, the ability to instantly share full documentation context with your AI assistant means faster, more accurate solutions.


[Screenshot] The "Copy Markdown for LLMs" button appears on every documentation page.

Machine-Readable Endpoints for AI Consumption


We've created dedicated endpoints that AI systems can easily parse and understand:


  • AI Sitemap (/ai-sitemap.json): A hierarchical navigation structure with metadata that helps AI understand our documentation organization.

  • Concept Maps (/concepts.json): Tag-based relationships showing how security concepts interconnect.

  • Plain Text Indexes (/llms.txt): Simplified content for quick AI parsing.

  • Structured Metadata (/ai-metadata.json): Site capabilities and statistics in machine-readable format.


Additionally, we generate an llms-full.txt file at build time. If you don’t want to choose a single documentation page via the button described above, you can grab this file and share it with your assistant directly.
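If you'd rather script against these endpoints than paste them by hand, a minimal sketch might look like the following. The endpoint paths come from the list above; the base URL and the JSON schema (`sections`/`title`/`children`) are assumptions for illustration, so inspect the real /ai-sitemap.json response before relying on them.

```python
import json
import urllib.request

# Assumed base URL for Chainguard Academy; adjust if yours differs.
ACADEMY = "https://edu.chainguard.dev"

def fetch_json(path: str) -> dict:
    """Fetch one of the machine-readable endpoints and parse it as JSON."""
    with urllib.request.urlopen(ACADEMY + path) as resp:
        return json.load(resp)

def list_sections(sitemap: dict) -> list[str]:
    """Walk a hierarchical sitemap and collect section titles.

    The {'sections': [{'title': ..., 'children': [...]}]} shape is a
    guess for illustration, not the documented schema.
    """
    titles = []
    for node in sitemap.get("sections", []):
        titles.append(node["title"])
        titles.extend(list_sections({"sections": node.get("children", [])}))
    return titles

# Exercise the walker with a hand-rolled fragment (no network needed):
sample = {"sections": [{"title": "Images", "children": [{"title": "Node.js"}]}]}
print(list_sections(sample))  # ['Images', 'Node.js']
```

In a real script you would call `fetch_json("/ai-sitemap.json")` and feed the result to the same walker.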


Semantic HTML Enhancements


Every page in our documentation includes rich semantic markup that helps AI understand context:


<meta name="ai:content_structure" content="documentation">
<meta name="ai:endpoints" content="/ai-sitemap.json,/concepts.json,/llms.txt">

Code blocks include explicit language identification and contextual information, making it easier for AI to understand and suggest appropriate coding patterns with an emphasis on security.
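As a sketch of what this markup enables, a downstream tool could pull the endpoint list straight out of a page head using the two meta tags shown above. This is an illustrative parser, not a tool we ship, and any real crawler would be more robust:

```python
from html.parser import HTMLParser

class AIMetaParser(HTMLParser):
    """Collect <meta name="ai:..."> tags from a documentation page head."""
    def __init__(self):
        super().__init__()
        self.ai_meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name", "")
        if name.startswith("ai:"):
            self.ai_meta[name] = attrs.get("content", "")

# The exact tags from the snippet above:
page_head = (
    '<meta name="ai:content_structure" content="documentation">'
    '<meta name="ai:endpoints" content="/ai-sitemap.json,/concepts.json,/llms.txt">'
)
parser = AIMetaParser()
parser.feed(page_head)
endpoints = parser.ai_meta["ai:endpoints"].split(",")
print(endpoints)  # ['/ai-sitemap.json', '/concepts.json', '/llms.txt']
```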


AI Instructions and Discovery


We provide a comprehensive AI instructions document (/static/ai-instructions.md) that guides AI systems on how to:


  • Navigate our documentation structure.

  • Cite specific security recommendations with full URLs.

  • Understand our content types (tutorials, how-to guides, reference docs).

  • Link related security topics together.


This helps your AI assistant avoid the whirlpool of poor LLM ingestion by providing clear signposts as it cruises across Chainguard Academy.


Quality Controls for Accurate Information


To ensure AI systems receive consistent, accurate information:


  • Tag Validation: Pre-commit hooks enforce a standardized taxonomy of 50+ approved security and technology tags.

  • Technical Dictionary: Custom spell-checking with 149+ technical terms ensures consistent terminology.

  • Structured Content: Clear categorization of content types helps AI understand whether it's reading a security tutorial, a compliance guide, or an API reference.


While you may easily be able to parse that the misspelled “kubernets” refers to our favorite open source project named for the Greek word for helmsman, AI assistants don’t always make the same inference. Enforcing quality checks when contributors commit new docs helps ensure that our docs are easy to read for both humans and machines.
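A tag check along these lines is straightforward to sketch. The taxonomy below is a made-up subset and the function is illustrative, not our actual pre-commit hook:

```python
# Illustrative subset of an approved tag taxonomy; the real hook
# enforces a standardized list of 50+ tags.
APPROVED_TAGS = {"kubernetes", "sbom", "slsa", "cve", "node.js", "distroless"}

def validate_tags(tags: list[str]) -> list[str]:
    """Return the tags that are not in the approved taxonomy."""
    return [t for t in tags if t.lower() not in APPROVED_TAGS]

# "kubernets" gets flagged before the doc ever lands in the repo:
bad = validate_tags(["Kubernetes", "kubernets", "SBOM"])
print(bad)  # ['kubernets']
```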


Tools for the Voyage: Developer Gains Worth Singing About


Next time you ask your AI assistant, "How do I create a secure Node.js container?", point it to Chainguard Academy and it will be able to:


1. Access our structured documentation about Chainguard's Node.js images.

2. Understand the security benefits of our distroless approach.

3. Provide specific examples with proper context.

4. Link to related topics like SBOM generation and vulnerability scanning.

5. Suggest additional security best practices based on our comprehensive guides.


This AI-friendly approach delivers tangible security benefits:


  • Reduced CVE Exposure: AI assistants can accurately recommend Chainguard's minimal, frequently updated container images.

  • Supply Chain Security: AI can explain and guide you through SLSA compliance and attestation verification.

  • Best Practices by Default: AI can help make the secure choice the easy choice.


Security Wins Without Killing the Vibe


This isn’t just about better docs. It’s part of a larger shift in how we deliver secure software—from how we build container images and libraries, to how we make that knowledge usable through AI. Taking a page from the Chainguard Factory’s real-time rebuild model, Chainguard Academy delivers up-to-date answers and careful guidance with security baked in that’s ready for pair programming with an AI assistant.


No patches. No shortcuts. Just source-built docs your AI can trust: structured, secure, and ready to ship code that won’t sink you.


---


Want to see it in action? Feed your coding assistant Chainguard Academy’s llms-full.txt file and ask it something like: “What’s the most secure way to build a Node.js container with SBOMs and SLSA?” Then watch how it responds.

