This Shit Is Hard: SLSA L3 and Beyond
In previous iterations of our This Shit Is Hard series, we've gone over things like the Chainguard Factory, Chainguard Libraries for Java, and our integrations with many scanner partners. Now, we're talking SLSA!
Since day one, Chainguard has been championing SLSA. We believe this framework highlights many of the things we do best, and even if we go above and beyond SLSA requirements – which is the case for our reproducible images – it helps facilitate the conversation. Our involvement in SLSA goes way back, even before the company officially started, and we'll continue to use it as one of the ways we hold ourselves accountable to our customers.
With the new SLSA 1.2 currently a release candidate, we thought it would be good to go back and review how Chainguard has leveraged the practical parts of SLSA to help our customers better understand how we deliver containers with verifiable provenance, strong build isolation, and attestable security.
We're providing customers (and the public) secure, hardened container images, so it's not uncommon to hear our engineers talk about the risks to maintaining our customers' trust and about ways to constantly improve. SLSA, in our use case, is intended to be a vehicle for building trust and accountability.
In practice, SLSA is designed to help individuals and organizations measure and attest that their own software supply chain security efforts are in line with industry standards. We use SLSA to drive our own internal security posture and support our users in efficiently meeting their own SLSA compliance requirements.
We've previously written about and discussed Chainguard's supply chain tech stack using melange, apko, and all the components that go into our Factory, but let's zoom in on some of the hard shit with a SLSA lens.
The Hard Chunks of SLSA Build Level 3
SLSA has evolved from aspirational guidelines (v0.1-0.2) to practical, implementable security controls in v1.1, the latest release. This iteration focuses on secure build levels (with the v1.2 RC1 draft adding source management requirements in the future). For SLSA Build Level 3, we see two requirements mattering most:
True Isolation: Off-the-shelf containers won't cut it anymore; you need strong, ephemeral execution and isolation that's minted per build.
Provenance Integrity: Complete, tamper-resistant attestations with signing secrets isolated from build (and test) processes.
As we go into more detail, you might ask yourself, "What's the difference between this and something like GitHub-hosted runners? Why can't you just click the 'SLSA 3 GHA' button?" GitHub-hosted runners are, in fact, (pretty) isolated VMs per job, which is a great property, and the SLSA 3 GHA and Workflows are great if you are building a single image or package. But at our scale (did I mention over 1,500 container images and 10,000 packages?), the level of security, controls, and observability that we expect from our build environment isn't practical on GitHub, and honestly, it isn't what GitHub Actions was designed for. We still use GitHub runners for various tasks and take steps to secure them, but not for critical package build infrastructure.
To me, the jump from SLSA Level 2 to Level 3 isn't usually about configuration changes; it requires actual infrastructure investment. Many build platforms can't achieve Build Level 3 out of the box without fundamental architectural changes (I'm looking at you, Jenkins) or, let's be honest, a team that can spin why container-level isolation is "good enough" to meet the SLSA spec. Chainguard aligns with the security industry's consensus that containers do not provide a reliable security boundary for high-risk workloads. So we're doing the hard shit, which means constantly investing in our architecture to make it more hardened and more trustworthy.
SLSA L3 Build Isolation
The foundation of our build system rests on three core components working in harmony: Melange, our declarative package builder; what we call "Elastic Build," our internal hypervisor builder/runner/orchestrator; and, for containers, our apko tool.
We talk about Melange a lot, and it's exciting for us to see other organizations and companies adopting it as the basis of their own declarative package building systems. For that reason, we ensure that all improvements we make (like defaulting to the QEMU runner) show up in the upstream F/OSS repo of melange. Not only does it provide us with a reproducible build pipeline, but it's also the cornerstone for efficiently managing zero CVEs across the more than 1,500 images we now provide.
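If you've never seen it in action, a package build with the open source melange CLI looks roughly like this. The package config, key, and architecture below are illustrative placeholders, not our production setup:

# Generate a signing key (melange.rsa by default), then build a
# package definition inside an ephemeral QEMU microVM instead of a
# container. File names here are placeholders.
melange keygen
melange build example-package.yaml \
  --arch x86_64 \
  --runner qemu \
  --signing-key melange.rsa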
Sorry, but Containers Are Not Enough
Containers are a security boundary the way a screen door prevents robberies. Both work fine as long as you don't expect anyone to try to force their way around them. (If you don't believe me, stop by our very own Natalie Somersall's DEF CON workshop on container breakouts and say hi.) Unfortunately, build infrastructure is essentially a complex remote-code-execution-as-a-service offering. We don't have the privilege of assuming nothing will try to attack us; it's part of our threat model that upstream code could and would try to attack build environments from the inside. As someone who has been on both the giving and receiving end of build system exploits, I can appreciate that build systems are almost always ripe for attack.
We believe hypervisors can provide a strong, hardware-backed isolation strategy, which is why every package builder is isolated by QEMU using our custom microVM.
A Melange of Pods and Hypervisors
Our architecture leverages Kubernetes as the foundation of our build infrastructure, and the value it provides is a common platform to integrate our tooling around, not any level of strong security controls. Instead of treating pods as our isolation boundary (like I've seen way too many organizations do), we split execution further: from within the pod, into two separate microVMs.
Here's how it works in practice:

Inside each pod, we spin up multiple QEMU virtual machines, one for building and one for testing, where each is isolated from the other and from the host system.
This layered approach means that even if an attacker compromises the build process within one VM, they're less likely to affect the test environment, persist across builds, or access the pod's identity credentials. This has the added benefit of letting us securely build privileged packages, such as kubernetes-1.33 or iptables, that require certain capabilities like SETCAP during the build process.
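To give a flavor of what's happening under the hood, here's a heavily simplified sketch of the kind of QEMU invocation involved. Our actual Elastic Build plumbing is internal, so treat the kernel, initramfs, and flags below as illustrative stand-ins:

# Illustrative stand-in, not our production invocation: boot an
# ephemeral build microVM on KVM with a known-good kernel. The test
# phase gets its own separate microVM that shares nothing with this one.
qemu-system-x86_64 \
  -machine microvm -enable-kvm -cpu host \
  -m 4096 -smp 2 \
  -kernel vmlinuz -initrd build-initramfs.img \
  -append "console=ttyS0" \
  -no-reboot -nographic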
VMs in Pods in Clouds
But don't sleep on the importance of our cloud service providers; we're leveraging the standard tools available to us today: managed Kubernetes clusters, cloud identity, block storage, and managed container registries. There's one hard aspect to all of this, though: nested virtualization, the piece that lets us run a hypervisor on top of the CSP's hypervisor. At the time of writing, GKE is the only cloud (that we have partnered with) that provides managed Kubernetes nodes supporting nested virtualization, and even then, only x86-64 nodes are supported today. So we have to come up with a different solution for our Arm packages to achieve the same level of isolation as the rest.
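For the curious, turning nested virtualization on for a GKE node pool looks roughly like the following. The cluster and pool names are placeholders, and you'll need an x86 machine type that supports nested virtualization, so check Google's current docs for the exact requirements:

# Placeholder names; requires a supported x86 machine type.
gcloud container node-pools create build-pool \
  --cluster=build-cluster \
  --machine-type=n2-standard-16 \
  --enable-nested-virtualization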
Our other cloud provider partners also offer managed Kubernetes, but unlike GCP, they require you to manage your own bare-metal nodes for nested virtualization, which are more expensive and carry a lot more management overhead. We'll likely scale to do this eventually, but let's take on one hard thing at a time.
Kernels on Kernels on Kernels
One of the interesting bits to me is how we set up a secure build environment within a microVM: we build and configure our own secure Linux kernel, along with all the information needed to verify its software supply chain. This is where having the internal expertise of folks like Luca DiMaio to build our own operating system and kernel comes into play. Not only do we produce secure Chainguard VMs as a product based on Chainguard OS, but we also use Chainguard OS as the basis for the Linux microVM inside our hypervisor that, among other things, maintains the integrity of our build executions at all costs.
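We're not going to dump our kernel config here, but the flavor is familiar if you've ever hardened a kernel. Purely as an illustration (these are stock upstream Linux parameters, not our actual cmdline), a locked-down microVM boot line might swap something like this into the -append flag from the sketch above:

# Illustrative stock kernel parameters, not our real configuration:
#   lockdown=integrity    - even root can't modify the running kernel
#   module.sig_enforce=1  - refuse to load modules we didn't sign
#   panic=-1              - any kernel panic kills the microVM immediately
KERNEL_CMDLINE="console=ttyS0 lockdown=integrity module.sig_enforce=1 panic=-1"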
SLSA L3 Provenance: Proving the Unprovable
While the first part of SLSA Build L3 was straightforward to design, it was complicated to implement to our standards. Conversely, we've found that SLSA Provenance was complicated to design but straightforward to implement. We repeatedly discussed what should actually go into our provenance information, with a particular focus on one of our core principles: Customer Obsession. So we would ask, "What do customers want?" And the answer was resounding: "meh."
That's not to say we gave up. In fact, we doubled down. We have to do the hard things. This is the right thing to do to help our customers understand and verify what we're doing on their behalf. Provenance data is the perfect example of “difficult and important, yet boring” engineering work.
For example, customers can log into our console today and see the latest version of the provenance we provide, which focuses on the most relevant information they need to verify packages themselves:
cosign download attestation --predicate-type=https://slsa.dev/provenance/v1 cgr.dev/{customername}/node-fips:latest | jq -r .payload | base64 -d | jq .predicate
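Downloading shows you the data; cosign can also verify the signature over it before you trust it. Something like the following works, though the identity and issuer values below are illustrative, so check our SLSA documentation for the exact signing identity to pin against for your images:

# The identity/issuer values below are illustrative; see our docs for
# the exact values to verify against.
cosign verify-attestation \
  --type=https://slsa.dev/provenance/v1 \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='^https://github.com/chainguard-images/' \
  cgr.dev/{customername}/node-fips:latest \
  | jq -r .payload | base64 -d | jq .predicate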
But what was the point?
At some point, we should ask, "Why SLSA?" My response has always been that it gives you the flexibility to attest that what you're doing follows industry best practices without being yet another compliance checkbox. In the best case, it's a way of building milestones into your current software supply chain security program, or of describing what you should expect when consuming third-party dependencies in your organization. We have to admit, though, there's a range of implementations out there. We aim to focus on our customers' needs, and on our novel position within the industry right now, to build a practical implementation that's faithful to the SLSA spec.
Hopefully, all of this conveys what you likely already know: that we take secure build, provenance, and all of the software supply chain security bits seriously. We’ll continue measuring SLSA Levels on our new product lines, Chainguard Libraries and Chainguard VMs, as they make their way to General Availability. Today we’re accountable to the levels of SLSA v1.1 and we’ll continue to support future SLSA releases.
Want to verify our claims? Try the cosign command above on any Chainguard container image, or check out our new SLSA documentation and SLSA course to dive deeper.
Ready to Lock Down Your Supply Chain?
Talk to our customer-obsessed, community-driven team.