Your riskiest supplier isn't a vendor. It's a registry.
Public package registries create a growing assurance gap in software supply chains. Building libraries from verified source gives APRA-regulated institutions a more defensible way to manage malware risk, operational resilience, and auditability.
Public package registries have become a foundational part of modern software delivery. Australian banks and other APRA-regulated institutions pull thousands of open source packages into their environments every week across customer-facing applications, fraud and analytics platforms, internal tooling, and core business systems.
Yet while organisations typically apply formal oversight to outsourced technology providers, the public registries that distribute much of that software, including npm, PyPI, and Maven Central, are often treated as background developer infrastructure rather than as meaningful third-party dependencies.
That assumption is becoming harder to defend.
As software supply chain attacks become more frequent and consequential — Trivy, LiteLLM, Shai-Hulud, Axios, to name a few from the last few months alone — public registries can no longer be treated as a convenience layer. They form part of the software distribution path through which code reaches production systems. That places them inside the trust boundary for modern software delivery.
For APRA-regulated entities, that raises an important question: where critical systems depend on software distributed through public registries, does that reliance create an assurance gap under CPS 230 and CPS 234?
Public package registries deliver convenience, not assurance
Public package registries solved a major distribution problem for software developers. They made it easy to publish, discover, and consume reusable code at internet scale. But they were not designed to provide the level of assurance that regulated institutions require.
In many cases, when a developer runs npm install, pip install, or pulls a Java dependency from Maven Central, they are consuming a pre-built artifact without independent proof that it was built from the original source code, in a controlled environment, by a trustworthy process.
That distinction matters.
If a published package cannot be reliably tied back to source and a trustworthy build process, organisations are trusting an artifact they did not build and cannot independently verify.
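Hash pinning is the closest widely deployed approximation of independent verification, and it is worth being precise about what it does and does not prove. The sketch below (plain Python, with an illustrative digest) checks an artifact against a pinned SHA-256, the same idea behind pip's hash-checking mode (`pip install --require-hashes`). A matching hash proves only that you received the same bytes that were pinned earlier, not that those bytes were built from the claimed source.

```python
import hashlib

# Digest recorded when the dependency was first reviewed (illustrative value;
# in practice this lives in a lockfile or hash-pinned requirements file).
PINNED_SHA256 = hashlib.sha256(b"artifact bytes as first reviewed").hexdigest()

def matches_pin(artifact: bytes, pinned: str) -> bool:
    """Check an artifact against a pinned SHA-256 digest.

    A match proves only that these are the same bytes that were pinned
    earlier. It says nothing about whether the bytes were built from the
    project's source code, which is the assurance gap described above.
    """
    return hashlib.sha256(artifact).hexdigest() == pinned

print(matches_pin(b"artifact bytes as first reviewed", PINNED_SHA256))  # True
print(matches_pin(b"tampered artifact", PINNED_SHA256))                 # False
```

If the pinned artifact was malicious from the outset, the pin faithfully reproduces the compromise on every install, which is why hashes complement, rather than substitute for, source-based verification.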
This is a structural gap that attackers routinely exploit.
Recent attacks show why this matters
The past few years have shown how attractive package ecosystems have become to attackers. Rather than compromise one organisation at a time, threat actors increasingly target the distribution points and developer workflows that can give them access to many organisations at once.
Recent high-profile examples illustrate the pattern:
In March 2026, malicious Trivy releases helped trigger a broader cross-ecosystem compromise, with downstream impacts across PyPI and npm. A separate attack in the same period delivered malware via the npm package for axios, a widely used HTTP client with hundreds of millions of downloads a month.
In September 2025, the Shai-Hulud campaign used stolen maintainer credentials to publish malicious npm packages at scale and propagate through trusted developer workflows.
In March 2024, coordinated typosquatting campaigns on PyPI and namespace-related attacks targeting the Maven ecosystem demonstrated how easily dependency trust can be abused.
These incidents differ in technique, but they share a common feature: they exploit the fact that most organisations still trust what public registries distribute without requiring strong proof of origin, build integrity, and provenance.
This threat is only growing. As AI coding tools become more widely used, they introduce a new class of supply chain exposure, including so-called slopsquatting attacks, where attackers anticipate package names that coding assistants may hallucinate and publish malware under those names.
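One lightweight preventive layer against both typosquatting and slopsquatting is to gate requested package names against an internal allowlist and flag near-misses before anything is installed. A minimal sketch, using Python's standard-library `difflib` for similarity scoring (the allowlist and threshold are hypothetical; a real deployment would gate against a curated internal catalogue):

```python
from difflib import SequenceMatcher

# Hypothetical internal allowlist of approved dependencies.
APPROVED = {"requests", "numpy", "pandas", "axios"}

def classify(name: str, approved=APPROVED, threshold: float = 0.8) -> str:
    """Classify a requested package name before any install runs.

    Near-misses to approved names are the typosquatting signature;
    unknown names (including AI-hallucinated ones) are quarantined.
    """
    if name in approved:
        return "approved"
    near = [a for a in sorted(approved)
            if SequenceMatcher(None, name, a).ratio() >= threshold]
    if near:
        return f"possible typosquat of {near[0]}"
    return "unknown: quarantine for review"

print(classify("requests"))  # approved
print(classify("reqeusts"))  # possible typosquat of requests
print(classify("left-pad"))  # unknown: quarantine for review
```

A check like this catches only look-alike names; it does nothing against a compromised legitimate package, which is why it belongs in a layered defence rather than standing alone.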
For security leaders, the implication is clear: the risk is no longer limited to developer error or isolated maintainer compromise. The package distribution layer itself has become part of the attack surface.
Cooldowns help, but they do not establish trust
One common response to registry-based attacks is to introduce a cooldown period, whether 24 hours, 7 days, or longer, before newly published package versions can be consumed. This is a sensible control. It can reduce exposure to malware that is quickly identified and reported by the community.
But cooldowns do not solve the underlying assurance problem.
They do not prove that a package matches its source code. They do not prove that it was built in a trustworthy environment. And they do not prevent an organisation from eventually consuming an artifact that was malicious from the outset, but simply remained undetected for longer than the cooldown window.
They also create tension with CVE management. In practice, organisations often need to move quickly to the latest patched version in order to remediate known vulnerabilities. A mandatory cooldown can delay that response, creating a trade-off between reducing exposure to newly published malware and promptly patching known security issues.
For regulated institutions, that distinction matters. Delaying trust is not the same as establishing trust.
Cooldowns should remain part of a layered defence, but they are not a substitute for stronger controls over how dependencies are sourced, built, and verified.
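That trade-off can be made explicit in policy code rather than left implicit in tooling defaults. A minimal sketch of a cooldown gate with a CVE exception (the 7-day window and the bypass flag are illustrative choices, not a recommended configuration):

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=7)  # illustrative window

def allow_version(published_at: datetime, now: datetime,
                  fixes_known_cve: bool = False) -> bool:
    """Hold newly published versions for a cooldown period.

    Versions that remediate a known CVE may bypass the hold, reflecting
    the patching trade-off described above. In practice the bypass should
    trigger additional review rather than silent admission.
    """
    if now - published_at >= COOLDOWN:
        return True
    return fixes_known_cve

now = datetime(2025, 10, 1, tzinfo=timezone.utc)
two_days_old = now - timedelta(days=2)
print(allow_version(two_days_old, now))                        # False
print(allow_version(two_days_old, now, fixes_known_cve=True))  # True
print(allow_version(now - timedelta(days=10), now))            # True
```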
Building from source changes the model
A more defensible approach is to consume libraries that have been rebuilt from verified source code in a controlled build environment, with cryptographic evidence showing what was built, where it came from, and how it was built.
That is the model Chainguard Libraries provides across JavaScript, Python, and Java ecosystems.
Instead of relying on whatever artifact was uploaded to a public registry, Chainguard rebuilds packages from its upstream source repositories in a SLSA Level 3-compliant environment. Each package is delivered with Sigstore signatures, a signed software bill of materials (SBOM), and provenance attestations.
This changes the security and assurance model in important ways. If an attacker compromises a maintainer account and uploads a malicious package to a public registry, that does not automatically compromise an environment consuming packages rebuilt from verified source, because the uploaded artifact is no longer what is being trusted. If a typosquatted package has no legitimate upstream source repository, it cannot meet the requirements for trusted source-based rebuilding.
Just as importantly, this is not only a preventive control. It is also an evidentiary one.
For CISOs, risk teams, internal audit, and regulators, provenance, signatures, and SBOMs provide something public registries often cannot: independently verifiable evidence of software origin and build integrity.
Chainguard’s research has shown that the overwhelming majority of known malicious packages do not meet this standard because they lack publicly verifiable source code. In practical terms, that means a build-from-source model can eliminate a large class of software supply chain attacks before they reach enterprise build systems.
For example, when Chainguard reviewed this approach against 3,025 known malicious Python packages, 98% would have been prevented simply by requiring that packages be buildable from verified source.
Why CPS 230 and CPS 234 make this a governance issue
For APRA-regulated institutions, that is not just a security improvement. It is also a governance and assurance question. Where public registries form part of the path through which software reaches critical systems, institutions need to ask whether the current level of oversight, assurance, and control is proportionate to the operational and information security risk.
Australian authorised deposit-taking institutions (ADIs), general insurers, life insurers, private health insurers, and superannuation entities are all subject to APRA’s prudential framework. Two standards are particularly relevant here: CPS 230 Operational Risk Management and CPS 234 Information Security.
What CPS 230 should prompt institutions to ask
CPS 230 does not explicitly name public package registries. It does not need to. The standard is principles-based and focuses on dependencies that support critical operations or expose the entity to material operational risk.
For many financial institutions, modern software delivery depends on public registries as part of the path code takes to enter production. That raises a legitimate governance question: where a compromise in that distribution layer could disrupt critical operations, should that dependency be assessed more explicitly under the entity’s operational risk framework?
For CISOs, CIOs, and operational risk leaders, the practical questions are:
- Which critical systems rely on open source packages sourced from public registries?
- What assurance exists that those packages are tied to verified source and trustworthy builds?
- How would the institution evidence due diligence over that dependency if challenged by internal audit, the board, or APRA?
- If the dependency is significant, is the current control model proportionate to the operational risk?
No single library download is material in isolation. But that framing misses the point of how materiality accrues. CPS 230 recognises that materiality may arise from an individual arrangement or from multiple arrangements taken together. The relevant question is not whether any one package matters, but whether the registry infrastructure delivering packages across critical systems has become a sufficiently important dependency that it warrants more explicit treatment. Whether that threshold is met is a judgment for each entity, but it is a question that should be asked deliberately rather than left implicit.
What CPS 234 should prompt institutions to assess
CPS 234 is even more directly relevant because it focuses on whether an entity’s information security capability is commensurate with the threats to its information assets, including where third parties are involved.
In the context of software supply chains, that shifts the conversation from “How much open source are we using in our critical systems?” to “What security capability and assurance support the third-party channels through which open source software enters our environment?”
Source code is an information asset. The software artifacts deployed into production are information assets. Where those artifacts are distributed through third-party registries or package ecosystems, organisations should consider whether the available security evidence and assurance are sufficient for the criticality of the systems involved.
This is where public registries often present an uncomfortable gap. The issue is not only that attacks occur, but that independently verifiable assurance remains inconsistent and limited. In many cases, there is little evidence that a package was built from the source it claims to represent, through a controlled, trustworthy process.
Standards such as PEP 740 support publicly verifiable provenance for Python packages, but adoption remains uneven even among popular projects, and far lower across the long tail. For compliance, audit, and security leaders, this can leave the criticality of dependencies high while available assurance remains relatively weak.
A more defensible approach improves both prevention and evidence: reducing reliance on uploaded artifacts while strengthening the organisation’s ability to demonstrate how software supply chain risk is being managed.
What a defensible posture looks like
A defensible posture for an APRA-regulated entity is not to eliminate open source. That would be neither realistic nor desirable. It is to consume open source through controls that better align with operational resilience, information security, and assurance expectations.
Building libraries from source is an especially valuable preventative control because it does more than detect compromise after the fact. Used alongside detective and corrective controls, it helps remove entire categories of software supply chain attacks before they reach the organisation’s environment. That durability is what makes it such a strong foundation for a more defensible approach.
In practice, that means five things.
First, recognise the dependency. Treat the software distribution and build layer behind open source consumption as a governance and assurance question, not merely as a developer workflow.
Second, require stronger proof. Prefer dependencies that are rebuilt from verified source and accompanied by provenance, signatures, and SBOMs, rather than relying solely on public registry artifacts.
Third, layer controls. Cooldown periods, dependency minimisation, scanning, and policy controls still matter. But they should complement, not replace, preventative measures that reduce the likelihood of compromised packages entering the build pipeline in the first place.
Fourth, improve evidentiary posture. Ensure the organisation can demonstrate, not merely assert, how it manages open source supply chain risk for critical systems.
Finally, integrate pragmatically. Stronger controls need to work with existing engineering and security processes. Chainguard Libraries can sit upstream of common artifact management platforms such as JFrog Artifactory, Sonatype Nexus, and Cloudsmith, allowing organisations to improve assurance without requiring major changes to developer workflows.
That matters because the best control is often the one that meaningfully improves risk posture without creating friction that teams will work around.
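Taken together, those five practices can be expressed as an admission gate that runs before a dependency enters the internal mirror. A sketch under the assumptions above (the check names are illustrative, and a real gate would consume verified results from signature and provenance verification rather than pre-set booleans):

```python
from dataclasses import dataclass

@dataclass
class DependencyEvidence:
    name: str
    has_upstream_source: bool  # verifiable source repository exists
    built_from_source: bool    # rebuilt from that source, not an upload
    has_provenance: bool       # provenance attestation verified
    has_signature: bool        # e.g. Sigstore signature verified
    has_sbom: bool             # signed SBOM present

def admission_failures(dep: DependencyEvidence) -> list[str]:
    """Return the list of unmet requirements; an empty list means admit."""
    required = {
        "upstream source": dep.has_upstream_source,
        "source-based build": dep.built_from_source,
        "provenance": dep.has_provenance,
        "signature": dep.has_signature,
        "sbom": dep.has_sbom,
    }
    return [check for check, ok in required.items() if not ok]

good = DependencyEvidence("example-lib", True, True, True, True, True)
typosquat = DependencyEvidence("example-libz", False, False, False, False, False)
print(admission_failures(good))       # []
print(admission_failures(typosquat))  # all five checks fail
```

Returning the failed checks, rather than a bare boolean, is what makes the gate evidentiary as well as preventive: each rejection is a record the institution can show to audit.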
The framework already applies
The strength of APRA’s prudential framework is its principles-based approach. It does not need to name every modern technology dependency to be relevant to the risks it creates.
Where public registries form part of the path through which software reaches critical systems, institutions should assess them with the same seriousness they apply to other important technology dependencies.
Open source is already essential. The real question is whether the way it is sourced, verified, and governed is defensible.
For many APRA-regulated institutions, that will increasingly depend on whether they can move beyond trusting uploaded artifacts and towards consuming open source with verifiable provenance, source-based builds, and auditable evidence.
That is the gap Chainguard Libraries is designed to close.
Chainguard Libraries provides malware-resistant open source dependencies built from source for Java, Python, and JavaScript. Every package ships with SLSA provenance, Sigstore signatures, and signed SBOMs. Learn more at chainguard.dev/libraries.