Ship and patch doesn't cut it in the AI era

Dan Lorenc, Co-founder and CEO

For decades, software security followed a simple loop. Build. Ship. Patch what breaks. It was never perfect, but it worked well enough because the pace of change was human. Developers wrote code, security teams reviewed it, and attackers operated with similar limitations.

That world is over, and I'm not sure everyone realizes it yet.

AI took the brakes off writing code

Almost every engineering team is now using AI to write code. The productivity gains are real and not slowing down. But there's an easy-to-miss side effect.

AI coding tools don't just generate application logic. They pull in dependencies, suggest libraries, and assemble entire pieces of software from public sources — at a speed no human reviewer can match, and usually without verifying whether those dependencies can actually be trusted.

That means every sprint introduces more attack surface. More packages. More versions. More transitive dependencies. Security teams aren't reviewing tens of dependencies anymore. They're dealing with thousands, many of which arrived autonomously through AI-generated code.

The volume alone breaks the traditional model.

AI also changed how attacks work

At the same time, attackers are using AI to move faster, operate at scale, and lower the skill floor for executing an attack. What used to take weeks now takes hours. Vulnerabilities are discovered faster. Exploits are generated faster.

We're already seeing malware that rewrites itself to evade detection, largely autonomous attack campaigns, and new classes of threats like prompt injection and dependency-based attacks targeting the supply chain directly. The recent TeamPCP attacks on Trivy and LiteLLM are a perfect example. CI/CD workflows and GitHub Actions are also being hit.

Attacks aren't just getting more sophisticated. They're getting faster than the processes designed to stop them.

The gap is widening

Put those two shifts together, and you get a widening gap. On one side, AI is increasing the amount of code and dependencies entering your environment at an unprecedented rate. On the other, AI is reducing the time it takes to find and exploit weaknesses in that code.

The traditional response is to add more scanning, more automation, more patching. But all of those approaches share the same limitation: they're reactive. They start after the code has already entered your system. That worked when development was slower. It doesn't work when both code generation and attack orchestration are happening at machine speed.

You can't close that gap by reacting faster to something that's already too fast.

Why patching is no longer enough

Patching is built on a simple assumption: identify a problem after it appears and fix it before it gets exploited. That requires being able to see the problem, having time to respond, and the problem being well-defined.

AI breaks all three.

Not every issue has a known identifier. Malware doesn't come with a CVE. A compromised package can pass a clean scan if it isn't cataloged yet. The window between discovery and exploitation is shrinking — often measured in hours now. And when AI tools pull in dependencies automatically, you frequently don't know where a piece of code came from in the first place.

You still have to patch. Compliance requires it, and it's still worth doing efficiently. But it's not sufficient as a primary defense. Relying on it as your main strategy is a bet that you can move faster than both your developers and your attackers. Most teams can't win that bet.

The real problem is trust

We talk a lot about speed, but the core issue isn't speed. It's trust.

We built a software ecosystem where anyone can publish a package to a public registry with minimal verification. That model depended on the assumption that risk would be manageable and visible. AI blows that assumption up by amplifying both scale and opacity.

When an AI tool pulls in a dependency, the meaningful question isn't whether that dependency has known vulnerabilities. It's whether you trust where it came from in the first place. Scanning tells you what's known. It doesn't tell you if a package is what it claims to be. It doesn't prevent a malicious artifact from entering your environment.
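One concrete way to shift from "scan what arrived" to "trust what enters" is to admit only artifacts whose digests match a pinned, pre-approved list, denying everything else by default. A minimal sketch of that idea (the package name and digest here are hypothetical; in practice the allowlist would come from a signed lockfile or provenance attestation, not a hardcoded dict):

```python
import hashlib

# Hypothetical allowlist: artifact name -> expected SHA-256 digest.
# In a real system this would be sourced from signed, verifiable
# metadata rather than hardcoded.
TRUSTED_DIGESTS = {
    "example-lib-1.2.3.tar.gz":
        "a665a45920422f9d417e4867efdc4fb8a04a1f3fff1fa07e998e86f7f7a27ae3",
}

def is_trusted(name: str, payload: bytes) -> bool:
    """Admit an artifact only if its digest matches the pinned value."""
    expected = TRUSTED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown artifacts are denied by default
    return hashlib.sha256(payload).hexdigest() == expected

# A tampered payload or an unlisted package is refused, even if no
# scanner has ever flagged it.
print(is_trusted("example-lib-1.2.3.tar.gz", b"evil payload"))  # False
print(is_trusted("unknown-pkg.tar.gz", b"anything"))            # False
```

The point of the sketch is the default: an artifact isn't trusted because nothing known is wrong with it, it's trusted because its identity was verified before it entered the environment.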

That's why we're seeing attacks that exploit CI/CD workflows, naming, timing, and automation rather than traditional vulnerabilities. The attack surface has shifted upstream.

Surface area is the new security KPI

In a world shaped by AI, your security posture is defined less by how fast you can react and more by what you allow in. If that surface area grows with every sprint, you don't have a patching problem. You have a control problem.

At Assemble 2026, we focused on what fixing this actually looks like. As we build the trusted source for open source, we can't stop at container images. It has to extend across everything AI systems and developers touch — from OS packages and build pipelines to the artifacts and skills AI agents rely on. Rebuild software from source, continuously harden it, and enforce policy before anything reaches a developer or an agent. Control what's allowed to exist in your environment in the first place, instead of verifying what shows up.

Secure by default, continuously

Secure by default isn't a new idea, but it means something different now. It's not enough for something to be secure at a point in time. It has to stay secure as the environment around it changes. That requires a few things: software built from source in a reproducible way, continuous rebuilding and hardening, and verifiable provenance so you know exactly what you're running.
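Reproducibility is what makes "built from source" verifiable: two independent builds of the same source must produce bit-identical artifacts, so any divergence is detectable. A toy illustration of the check (real build systems compare full artifact digests across independent builders):

```python
import hashlib

def artifact_digest(artifact: bytes) -> str:
    """Digest of a build output; identical inputs must yield identical digests."""
    return hashlib.sha256(artifact).hexdigest()

# Two independent "builds" of the same source: digests must match exactly.
build_a = b"compiled output"
build_b = b"compiled output"
print(artifact_digest(build_a) == artifact_digest(build_b))  # True

# Any divergence -- an injected implant, a nondeterministic timestamp --
# shows up as a digest mismatch rather than going unnoticed.
build_c = b"compiled output + implant"
print(artifact_digest(build_a) == artifact_digest(build_c))  # False
```

A mismatch doesn't tell you *what* changed, only that something did, which is exactly the signal that triggers investigation before the artifact ships.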

When you have that foundation, the role of security changes. Instead of chasing issues across an ever-growing dependency graph, you reduce the number of problems that can enter your system at all. That's how you keep up with AI-driven development without turning it into a liability.

Any solution that slows developers down will fail — that's always been true. The goal isn't to limit AI. It's to ensure that when AI is used, it draws from a trusted set of inputs. Teams move at full speed without accumulating hidden risk. Security becomes infrastructure, not an extra step.

If you take one thing away: in the AI era, you can’t secure what you build after the fact. You have to secure what you build from the start.

Anything else is just catching up.
