Securing the ML supply chain with new Chainguard AI Images
While popular AI frameworks and tools like TensorFlow, PyTorch, and Kubeflow are great for data science use cases, they can pose challenges when deployed into security-critical environments because of their large size, broad attack surface, and package management issues.
Today, we are announcing a new Chainguard Images AI bundle: a comprehensive collection of images covering the stages of the AI workload lifecycle, from development images and workflow management tools to vector databases for production storage. As with all of our Images, they include critical software supply chain security features like software signatures, SBOMs, and CVE remediation. At Chainguard, we believe that developers shouldn’t have to choose between easy to use and secure, and our new collection of AI Images is another step in that direction. Today’s new Chainguard Images AI bundle includes:
Python, Conda, OpenAI and Jupyter notebook images for developing models and using the OpenAI API,
Kubeflow images for deploying production ML pipelines to Kubernetes-based platforms,
Milvus and Weaviate vector database images for data storage.
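To get started with the development images, you can pull them straight from Chainguard’s public registry. The commands below are a minimal sketch using the Python image’s standard cgr.dev path; check our catalog for the exact names and tags of the other images in the bundle.

```
# Pull the hardened Python base image from Chainguard's public registry.
docker pull cgr.dev/chainguard/python:latest

# Quick sanity check: the image's entrypoint is the Python interpreter,
# so arguments are passed straight to it (the :latest tag ships no shell).
docker run --rm cgr.dev/chainguard/python:latest -c "import sys; print(sys.version)"
```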
Take preventative steps to tackle AI supply chain threats
More than 53 percent of organizations are aggressively pursuing large language model (LLM) applications, and Docker recently reported an astounding 100 million pulls of AI/ML images on Docker Hub. The recent PoisonGPT attack technique demonstrated just how ripe LLMs are for security exploits, raising concerns about what an inevitable large-scale exploit could look like when it hits the LLM software supply chain. Soon after the PoisonGPT findings were published, AntidoteGPT was introduced to block this attack vector by applying techniques from Chainguard Images and the Sigstore project.
so PoisonGPT was a thing last week that attacked the supply chain of an LLM. AntidoteGPT is this week's solution! https://t.co/TRZMukeWx1
— Yasyf Mohamedali (@yasyf) July 17, 2023
I took the same technique used by @projectsigstore/@chainguard_dev and applied it to @huggingface's safetensors. No more untrusted models! https://t.co/vGc64nr2cQ
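As a rough illustration of that approach, the commands below use Sigstore’s cosign to keyless-sign a model weights file as an opaque blob and to verify it before use. The file name and signer identity are placeholders, and this is a simplified sketch rather than the exact safetensors integration described above.

```
# Sign the model weights keylessly and save the signature material in a bundle
# (model.safetensors and the identity values below are placeholders).
cosign sign-blob model.safetensors --bundle model.safetensors.sigstore

# Verify before loading: reject any weights whose signature does not match
# the expected signer identity and OIDC issuer.
cosign verify-blob model.safetensors \
  --bundle model.safetensors.sigstore \
  --certificate-identity you@example.com \
  --certificate-oidc-issuer https://accounts.google.com
```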
Large Language Models (LLMs) provide a powerful primitive that can be used to automate many tasks in our daily lives with shocking accuracy and efficiency. But data science workflows are at a similar evolutionary phase to software development 20 years ago, when automated testing, code review, and even version control were not yet the norm. Our Chainguard Labs team recently published lessons learned from securing container operating systems and how these principles apply to AI/ML infrastructure. As AI/ML workloads move past chatbots and into more sensitive use cases, the security of the infrastructure they run on will matter more and more. That’s why it was important to us to create a collection of hardened Chainguard Images dedicated to running these workloads.
How it works
As always, our Chainguard Images are based on Wolfi, our community Linux undistro, which means they’re hardened by default, a fraction of the size of comparable images, and aim to meet our standard zero-known-CVE SLA through daily updates and patching.
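Every public Chainguard Image also ships with a Sigstore signature and an SBOM that you can check yourself. Here is a minimal sketch using cosign against the Python image; the certificate identity follows the pattern documented for our public images, so adjust it for the image and tag you actually pull.

```
# Verify the image's Sigstore signature against the expected build identity.
cosign verify cgr.dev/chainguard/python:latest \
  --certificate-identity https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com

# Download the attached SBOM to see exactly what the image contains.
cosign download sbom cgr.dev/chainguard/python:latest
```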

You can try Chainguard Images today to see for yourself how we're working to improve the container image landscape with a secure-by-default design. Our free and public Images are available on the :latest and :latest-dev tags only. If you're interested in other versions of the images in today’s AI bundle, reach out to our team for more information. Our Chainguard Images inventory is always expanding, so if you need something you don’t see listed in our catalog, let us know.
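If you want an interactive environment to experiment in, the -dev variants include a shell and package manager that the minimal runtime tags intentionally leave out. For example, with the Python image used above:

```
# Open a shell in the development variant for interactive experimentation;
# the plain :latest tag omits a shell and package manager by design.
docker run -it --rm --entrypoint /bin/sh cgr.dev/chainguard/python:latest-dev
```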
Interested in seeing how we approach building our Images and what makes them more secure than the alternative options? Watch our CEO Dan Lorenc build Chainguard Images here: