Luck isn't a security control: What happened with mini Shai-Hulud and what you need to do
A new supply chain worm hit 400+ packages in the last 24 hours. Chainguard customers were not affected. What matters now is how your team adapts its security approach.
We joke internally that the sun has risen, so of course there is another supply chain attack. The pace feels relentless: every few days in 2026, another major supply chain attack lands on a public registry. The result is always the same: security and engineering teams spend a day (or an evening) triaging whether they were hit, rotating credentials, and taking to social media to debate the state of open source. Mini Shai-Hulud is the third major wave of this attack type, and the honest question at this point isn't whether you were affected, but whether your security posture is strong enough to immunize you against the next attack, because there will be a next one.
Every open source registry you pull from is an attack surface
The conversation across security teams has centered on CVEs for so long that it's easy to treat what's happening on npm and PyPI as a vulnerability management problem. These malware attacks are different: the danger spreads without a CVE ever being filed. Malicious code is inserted during the build and distribution stages of package creation, using the same infrastructure that ships legitimate releases.
The (not so) Mini Shai-Hulud attack is a perfect example. TeamPCP exploited pull_request_target workflows in TanStack's GitHub repo to inject malware into 84 versions across 42 distinct packages. Every compromised version carried the same provenance as the hundreds of safe versions published before it.
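To make the attack vector concrete, here is a minimal, hypothetical sketch of the risky GitHub Actions pattern that pull_request_target abuse relies on. This is illustrative only, not the actual TanStack workflow:

```yaml
# Hypothetical illustration of the vulnerable pattern (not the real workflow).
# pull_request_target runs in the context of the BASE repository, so the job
# receives repository secrets and a privileged GITHUB_TOKEN, even for pull
# requests opened from untrusted forks.
on: pull_request_target

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Danger: explicitly checking out the untrusted PR head...
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # ...and then executing its code. "npm install" runs any lifecycle
      # scripts the attacker added to the fork, now with access to secrets.
      - run: npm install && npm test
```

The widely documented mitigation is to use the plain pull_request trigger for anything that executes untrusted code, and to keep privileged steps in a separate workflow that never checks out or runs the PR's contents.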
The question worth sitting with now is: "Where in our development process are we pulling from public registries, and what is our basis for trusting them?" Unfortunately, we can no longer depend on the "vibe" heuristics that made us feel protected: millions of downloads, thousands of GitHub stars, and hundreds of uncompromised versions.
For most teams, that answer spans a lot of ground. Libraries are the obvious entry point, but GitHub Actions run with elevated permissions inside your CI/CD pipelines and pull from public sources. AI coding agents resolve dependencies from public registries during code generation and also consume open source skills. Agent configuration files are now a documented attack surface, too. Each of these is a distribution channel an attacker can compromise, and attackers are getting very good at it.
AI tools are being targeted deliberately
The spread of this campaign to AI tooling is strategic. If you're a pickpocket, do you spend your time on a quiet cul-de-sac or on a busy pedestrian thoroughfare? Attackers are targeting AI tools because there are millions of popular targets and billions of potential victims. Viewed from the attacker's perspective, the upside is obvious.
AI tools are being adopted quickly across every major organization, from Fortune 500 enterprises to buzzy new startups. Each new tool is a way into your environment. Coding agents are spreading across engineering orgs at a similar clip. Each new agent-crafted pull request is a way in. AI is empowering non-technical teams to ship to production. Each new developer is a way in.
Unfortunately, all of this compounds in the attacker's favor. More tools produce more code. More developers adopt those tools. The result is clear: more ways to get attacked. As engineering teams move faster and lean on AI to build, everything gets harder to manage, audit, and secure.
Preventive security is the only approach that works
The best way to defend against this year's rolling wave of supply chain attacks is to end your reliance on public open source registries that are optimized for scale rather than security. Think of it like trading LimeWire for iTunes in the early 2000s.
Concretely, it means pulling your open source dependencies from a trusted, secure source. Community GitHub Actions, npm, PyPI, and that random markdown file you found online and threw into your AI model of choice all served us well. Unfortunately, the old way of pulling open source has been unable to keep pace with the AI-assisted attacks that threaten everyone’s data, customers, and sanity. The 2026 security ethos is simple: every dependency you pull from a public source adds to your attack surface.
The attacks hitting the ecosystem are not going to slow down. If anything, today's pace is the slowest it will ever be. The teams that come out of this period in solid security shape will be the ones who decided their security posture could no longer rely on the luck of attackers missing them by minutes or hours.
The upside of preventive security is speed and focus
When luck is your primary security control, every new attack matters, because it could be the one that detonates inside your environment. Each one brings a new incident response channel, an apology text to the security engineer you're asking at 11pm to get online and pull apart dependency trees, and frustration that the team is back in triage mode instead of building the next big thing.
No one wants production code to be compromised, but it's also important to account for the cost of disruption. It takes significant mental energy and bandwidth to piece together whether you are impacted and exactly what to do about it, energy that is better spent building a product that wins new customers.
The dead man's switch in this mini Shai-Hulud attack is a great example. Everyone's first instinct is to revoke the stolen token, but in this campaign doing so would wipe the developer's entire directory. That changes remediation for every attack going forward. Now every response starts with "Were we impacted?" and, if you were, continues with "How do we handle this without deleting years' worth of code?"
The silver lining is that when you shift to preventive security, you end the tradeoff between security and speed. You can build at AI speed without worrying about AI attacks. Shifting left requires you to think about your entire software supply chain: not one dependency, not one container image, not one agent skill, but everything you pull from open source.
That's what makes Chainguard different. We don't treat a single artifact as a commodity; we see every link in the ever-growing software supply chain as an opportunity for an attacker to wreak havoc. We've built the largest hardened, production-ready open source catalog on the planet. No one is as focused on this problem as we are, and it's why our customers were safe from this attack and will be safe from the next one, and the one after that. They can do what they do best, innovating, while we do what we do best: making sure they can do so safely.
Get in touch with our team to learn more about how Chainguard can help you secure your software supply chain.