The US House of Representatives recently passed a bill that would forbid the Department of Defense (DoD) from procuring any software application that contains even a single known security vulnerability (CVE). At first glance, to someone outside the industry, banning the sale of software with known vulnerabilities sounds perfectly fair.
Why would you sell something vulnerable? And why would someone buy it, especially an organization responsible for national security? But to anyone who has spent time looking at CVE scan results, this idea is misguided at best and an impending sh*tshow at worst.
Why is it so hard to ship software without vulnerabilities? I think there are a few problems here that combine to make it untenable today.
Vulnerability data is bad. Like really, really bad. As an industry, we have not figured out how to accurately score severity, measure impact, or track known vulnerabilities in a scalable way. These challenges have been debated at length, and there is a lot of good research analyzing them further. For example, researcher Jacques Chester explains why CVSS (the industry-standard mechanism for scoring how severe a vulnerability is) is a broken, Kafkaesque nightmare, and Josh Bressers explores the core problems and incentives that make the National Vulnerability Database (NVD) unworkable at scale.
Asset inventory is bad. The NVD contains entries mapping vulnerabilities to software, which is only useful if you know what software you have. I hate to use the phrase, but software has eaten the world, and most organizations and individuals don’t actually know what software they’re using. Sure, you can name a few important packages or systems, but those are built on thousands of other, even more important packages and systems. Plus, you’re almost certainly forgetting some. This is called “shadow IT,” and according to Gartner, up to 35% of IT spend was on software the organization didn’t know about.
Tooling is bad. Well, not all of it: tools to find bugs are getting better, but the tools to fix those bugs aren’t. The scale is massive, and as an industry we just haven’t kept up. This has been a boiling-frog problem: the effort required to triage vulnerabilities and apply patches has steadily increased every year without anyone noticing, and now we’re getting cooked alive. The challenge breaks into two parts. First: all code has bugs, so the more code we use, the more bugs there are. Some of those bugs cause security issues, so it stands to reason that more security issues are found each year. Second: we’re finding bugs faster than we can fix them. Automated discovery techniques like fuzzing, SAST, and DAST are improving rapidly, but fixing what they find is still incredibly manual. The result is that vulnerabilities rise quadratically rather than linearly over time: we find more bugs per piece of software, and we’re using more software.
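The arithmetic behind that quadratic claim is simple to sketch: if the amount of software in use grows roughly linearly, and the rate at which bugs are found per piece of software also grows roughly linearly, the total found per year grows with their product. A toy model (all growth rates below are made-up assumptions for illustration, not measured data):

```python
# Toy model: total vulnerabilities found per year when both the number of
# dependencies in use and the find-rate per dependency grow linearly.
# All starting values and growth rates are illustrative assumptions.

def vulns_found(year: int, deps_growth: int = 50, rate_growth: float = 0.02) -> float:
    """Vulnerabilities found in a given year (year 0 = today)."""
    dependencies = 1000 + deps_growth * year   # linear growth in software used
    finds_per_dep = 0.5 + rate_growth * year   # linear growth in detection rate
    return dependencies * finds_per_dep        # product of two linear terms: quadratic

growth = [vulns_found(y) for y in range(11)]
# The year-over-year increases themselves keep increasing, the
# hallmark of quadratic growth.
deltas = [b - a for a, b in zip(growth, growth[1:])]
print(deltas)
```

Under these assumptions the yearly increase itself grows every year, which is why a triage process sized for today’s backlog keeps falling further behind.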
Combined, these factors have put us at a breaking point. Even if we had accurate vulnerability information, we’d struggle to map it to our asset inventories, because those inventories are inaccurate too. But we don’t have accurate vulnerability information either, so both sides of the equation are a mess. And each year the problem gets harder.
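The mapping step is mechanically simple, which is exactly why data quality dominates: a scanner is ultimately doing a join between a vulnerability feed and an asset inventory, and a wrong name or version on either side silently drops a match. A minimal sketch of that join (the package names, versions, and CVE IDs are invented for illustration):

```python
# Minimal sketch of what a vulnerability scanner does: join a vulnerability
# feed against an asset inventory by (package, affected version).
# All package names, versions, and CVE IDs are invented examples.

# Vulnerability feed: CVE ID -> (package name, set of affected versions)
feed = {
    "CVE-0000-0001": ("http-parser", {"2.9.3", "2.9.4"}),
    "CVE-0000-0002": ("yaml-loader", {"1.0.0"}),
}

# Asset inventory: what we *think* we are running.
inventory = {
    "http-parser": "2.9.4",
    "yaml-loader": "1.1.0",  # stale entry: production actually runs 1.0.0
}

def find_matches(feed, inventory):
    """Return CVE IDs whose (package, version) pair appears in the inventory."""
    return sorted(
        cve
        for cve, (pkg, versions) in feed.items()
        if inventory.get(pkg) in versions
    )

print(find_matches(feed, inventory))  # ['CVE-0000-0001']
```

The stale yaml-loader entry means CVE-0000-0002 is silently missed, with no error anywhere: garbage on either side of the join produces quietly wrong results, not failures.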
Building software that is more secure
So what should we do here? There’s no silver-bullet solution; we just need to do the hard work and fix it all. Thankfully, the work can be done in parallel, and there are already programs underway. Efforts like the Global Security Database (GSD) and Open Source Vulnerabilities (OSV) aim to improve upon the NVD by making vulnerability information more accurate and scalable.
Software Bills of Materials (SBOMs) will help with the asset inventory problem in the long term, although the immediate effect will be even more vulnerabilities to triage. Until we get accurate vulnerability information, this will probably feel like a step backwards, but the parallel Vulnerability Exploitability eXchange (VEX) effort should help vendors correct inaccurate information in the short term.
The open source project Sigstore is a step toward secure by default. Built with software developers in mind, this suite of tools integrates security practices like signing and verifying code and other software artifacts into daily workflows. The project continues to expand, with promising work coming out of the Gitsign tool, which lets Git commits be signed and verified by everyone collaborating on and using a project.
Finally, we just need to get better at building secure software. I don’t mean this in a “just try harder” way; we need to improve tooling and make software development secure by default. That starts with empowering developers and software engineers to rethink how they build software, starting from a place of security rather than bolting it on afterward. Doing so lets us eliminate entire classes of security vulnerabilities from the start and get ahead of the curve. It will require advances in tooling and automation rather than reliance on manual effort from developers. Memory-safe languages are a great example, and the folks at the Prossimo project are leading the way in rewriting critical software in modern languages.
A cultural shift is underway in the way we secure software, and Chainguard is driving it.
At Chainguard, we are building tooling to make secure-by-default software an achievable goal for teams and organizations. With a developer-first mindset, we are tackling each point in the software supply chain where vulnerabilities can arise, and integrating these tools in ways that give a clearer picture of the software and dependencies a company both creates and consumes. Our hope is that tools like these become standard practice in software development and, in turn, answer the increasing scrutiny around CVEs by eliminating them.