How Chainguard Deploys by Digest
Because it's critically important to know exactly what is running in your environment, we believe you should deploy by digest. This leaves no opportunity for a malicious or accidental tag push to sneak its way in and immediately roll out to production.
But digests are made for computers, not humans. The fundamental problem with digests is that they don't carry semantic information that humans need to understand what the image "is". The solution is automation.
At Chainguard we practice what we preach and deploy by digest, pinning image references by digest throughout our monorepo's source tree. As an illustration, here's a sample Deployment manifest:
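A minimal sketch of what such a manifest looks like; the service name, tag, and digest below are illustrative placeholders, not a real workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          # The tag is kept for human readability, but the digest after "@"
          # is what actually gets resolved and pulled, so a tag push can't
          # change what runs. The digest value here is a placeholder.
          image: cgr.dev/chainguard/nginx:latest@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

When both a tag and a digest are present in an image reference, the digest wins; the tag is effectively a human-readable comment.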
Our updater, digesta-bot, just does string matching, so it works equally well on YAML (Kubernetes manifests and GitHub Actions workflows), Dockerfiles, shell scripts, and more.
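Since the update is plain string matching, the core mechanic is easy to sketch. This offline toy stands in for one pass of the bot; the image name, digests, and file path are illustrative, and in the real flow the new digest would come from the registry (e.g. `crane digest "$IMAGE:$TAG"`) rather than being hard-coded:

```shell
#!/bin/sh
# Toy sketch of a digest-update pass: find every "<image>:<tag>@sha256:<hex>"
# reference in a file and rewrite the digest to the tag's current value.
set -eu

IMAGE="cgr.dev/chainguard/nginx"   # illustrative image
TAG="latest"

# In the real flow this would come from the registry, e.g.:
#   NEW_DIGEST="$(crane digest "${IMAGE}:${TAG}")"
# Hard-coded here so the sketch runs offline.
NEW_DIGEST="sha256:1111111111111111111111111111111111111111111111111111111111111111"

# A manifest pinned to a stale digest for the same tag.
mkdir -p /tmp/digest-demo
cat > /tmp/digest-demo/deploy.yaml <<EOF
          image: ${IMAGE}:${TAG}@sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF

# Pure string matching: rewrite "<image>:<tag>@sha256:<anything>" in place.
sed -i -e "s|${IMAGE}:${TAG}@sha256:[a-f0-9]*|${IMAGE}:${TAG}@${NEW_DIGEST}|g" \
  /tmp/digest-demo/deploy.yaml

grep "@${NEW_DIGEST}" /tmp/digest-demo/deploy.yaml
```

Because nothing here parses YAML, the same pass works unchanged on Dockerfiles, shell scripts, or any other text file that embeds an image reference.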
This means that our source always refers to image artifacts by digest, but those references can still follow a tag's updates. Changes aren't merged to the monorepo without passing tests, and any failures can be isolated and fixed in the pull request. If we discover after the pull request is merged that an image update is bad for whatever reason (e.g., it fails in staging), we can revert the digest update in question and deploy the old digest for the tag. The digest will be updated again on the next digesta-bot run, or we can manually babysit it to production if needed.
This also means that digest updates happen on a schedule, pass tests, and are reviewed and approved by a human before they reach production. This necessarily takes time: it can be days or longer before we roll out an upstream update. That's fine! Depending on your organization it can take much longer, and that's fine too! The most important thing is that updates are automated and happen as often as your organization can manage.
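As a sketch of the scheduled piece, a GitHub Actions workflow along these lines can run the updater on a cron and open a pull request for human review. The script path is hypothetical and the cadence is up to you:

```yaml
# Hypothetical scheduled digest-update workflow (names illustrative).
name: digest-update
on:
  schedule:
    - cron: "0 6 * * *"   # daily; pick a cadence that suits your org
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Update pinned digests
        # Hypothetical script doing the string matching shown earlier.
        run: ./hack/update-digests.sh
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "Update image digests"
```

The pull request is the review-and-approval gate: nothing reaches production until a human merges it and the change passes tests.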
It's Chainguard's responsibility to provide image updates as fast as possible, so that you can pick them up as quickly as you can. If it took us two weeks to release an update, it could never take you less than two weeks to deploy it.
Digests Aren't Enough
Using digests wherever possible, and automating updates to those digests, are important parts of securing your supply chain, but they're not the whole story.
Digests describe the complete state of a container image: immutably, provably, forever. That's good; you want a provable shorthand for the state of a thing running in production. But you also need more.
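To make that "provable shorthand" concrete: an image digest is the SHA-256 hash of the image's manifest bytes, so identical content always produces the same digest and any change produces a new one. A tiny offline illustration, with plain files standing in for OCI manifests:

```shell
#!/bin/sh
# Content addressing in miniature: hash two "manifests" that differ,
# and hash identical content twice. Plain files stand in for real OCI
# manifests; the principle is the same.
set -eu
printf 'manifest-v1' > /tmp/digest-a
printf 'manifest-v2' > /tmp/digest-b
printf 'manifest-v1' > /tmp/digest-c

sha256sum /tmp/digest-a /tmp/digest-b   # different content, different digests
sha256sum /tmp/digest-a /tmp/digest-c   # identical content, identical digests
```

This is why a digest can serve as proof of what's running: nobody can change the content without changing the digest.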
There are other questions you'll need to answer about your images, such as what software is inside them and when their tags change, and Chainguard provides tools and APIs to help answer them.
Chainguard provides signed SBOM attestations for all our images, listing every package and version of the software inside.
Like most registry implementations, Chainguard emits webhook events when images are pushed, including when tags are updated. You can subscribe to these messages and take action, or periodically check for tag updates, whichever matches your team's workflow best.
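For the polling option, the check reduces to comparing two digests: the one pinned in your source and the one the tag currently resolves to. A minimal sketch, with the function name and digest values illustrative; in practice the current digest would come from the registry (e.g. `crane digest cgr.dev/chainguard/nginx:latest`):

```shell
#!/bin/sh
# Compare the digest pinned in source against the digest a tag currently
# points at; a mismatch means the tag has moved and an update is due.
set -eu

check_drift() {
  pinned="$1"    # digest recorded in your source tree
  current="$2"   # digest the registry reports for the tag right now
  if [ "$pinned" = "$current" ]; then
    echo "up to date"
  else
    echo "tag moved: open an update PR"
  fi
}

# Hard-coded digests so the sketch runs offline.
check_drift "sha256:aaaa" "sha256:aaaa"
check_drift "sha256:aaaa" "sha256:bbbb"
```

Whether this runs on a cron or in response to a webhook event, the decision logic is the same; only the trigger differs.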
Could you keep on top of all of this by hand? I'd be surprised if you could! We couldn't do it ourselves, and we don't even try. We use automation and frequent releases to stay on top of updates. If we didn't release as frequently, we'd rely on the features described above to tell us what's waiting in the undeployed releases: which package updates (the potential risk) and which vulnerability fixes (the potential reward).
The hope is that we can likewise provide other teams the information necessary to make informed decisions, even automatically at scale.
We're not done improving this experience for our users. Software updates are complex and necessarily carry risk, but they also deliver value when they fix vulnerabilities and bring other improvements. Being able to provably identify the unique contents of an image, see its history, and understand the differences between two images is critical to making the latest updates maximally valuable while minimizing risk.