
Government perspectives on software self-attestation requirements

Chainguard Team
June 15, 2023

Last month, Chainguard CEO Dan Lorenc sat down for a virtual fireside chat with Chris Hughes, CISO and Co-founder of Aquia and Fellow at the Cybersecurity and Infrastructure Security Agency (CISA), to discuss the agency’s draft secure software self-attestation form, which is currently open for public comment. As a refresher, the Secure Software Development Attestation Form is a mandatory component of the software supply chain security requirements outlined in the Office of Management and Budget’s (OMB) Memorandum M-22-18 (September 2022). The Memorandum requires that all federal agencies, and any third-party vendors that provide software for government systems or information, meet NIST SP 800-218 (the Secure Software Development Framework, or SSDF) and the NIST Software Supply Chain Security Guidance. On June 9, 2023, OMB updated its guidance on software development practices, doubling down on the SSDF as the north star for self-attestation requirements and offering important clarifications about who will be required to self-attest within the next year.

In an effort to help organizations navigate this complex regulatory environment, we’ve been publishing a series of blog posts and hosting virtual discussions about the self-attestation form and the broader software supply chain security guidance put forth in recent federal government guidelines. Our next virtual panel is happening on Tuesday, June 20 with CRob, Director of Security Communications at Intel, who will provide an industry perspective on software self-attestations. Register here.

Progress toward a world where all software products are safe and secure by design is one of the top priorities for the global cybersecurity community. An edited Q&A transcript of Dan and Chris’s engaging conversation is below, or you can watch the full recording:

Panel transcript (edited version for readability) 

Dan Lorenc: Today we're going to be talking about the new software self-attestation form that came out from CISA. 

It’s been a busy couple of years in the regulation space, particularly related to software transparency and cybersecurity. This form, the software self-attestation form, just came out a few weeks ago in draft form. It's the latest in a long series of updates that all stem from Executive Order 14028, the Executive Order from the Biden Administration on cybersecurity practices that was itself partly a fallout of the attack on SolarWinds a few years ago.

So with an Executive Order, as much as everyone wishes, you can't just wave a magic wand and declare that all software has to be secure. There are a lot of steps in place, a lot of interactions between different government agencies, in order to get to a point where we can feel more confident in the software that we're building our government and our national security upon. So that Executive Order came out, and it directed a whole bunch of agencies to do a bunch of different things in a pretty intricate order, which is hard to keep track of as an outsider. I'm sure it's even harder for the folks that are working on it day to day. One of the first things it did was direct NIST to talk to the industry, talk to folks already working in the space, and start to compile a list of secure software development best practices. The first draft of that came out a few years ago, now called the Secure Software Development Framework (SSDF). It's a really long, really comprehensive compilation of current industry-standard best practices for secure software development, as well as some aspirational things. And after the SSDF came out from NIST, nobody was really forced to do any of it. It's just a document, a long PDF that NIST produced. It's got some great information in there, but without another kind of carrot or stick, it's just left there on the internet in PDF form for folks to read if they want to or if they're bored on an airplane. NIST can't go and force people to follow the standards it produces; it just publishes them for everyone to read. But that gets us to where we are today, where DHS and CISA put out this draft self-attestation form that organizations selling software to the government will be required to sign going forward for the government to be able to continue using their software.

So Chris, I know you wrote a blog piece recently about understanding the self-attestation form. I like your flow, I like your description of it, and you also had some nice historical context of other efforts that have followed a similar path. So why don't you tell us a little bit about that? I think you had a great high-level overview that we can jump into.

Chris Hughes: Definitely, I appreciate that, and as you said, there's been a chain of events that led to this point and got us to the self-attestation form. Obviously we had SolarWinds, and then Executive Order 14028, Section 4 in particular, focused heavily on software supply chain security. And then ironically enough, shortly after that was published, we had the Log4j incident, which put more emphasis and concern around software supply chain security, not just for proprietary software vendors but for open source software consumption. So we had the Executive Order, then we had the OMB memorandum M-22-18, which is also focused on software supply chain security and the government acquiring or obtaining secure software, with some requirements around that, one of which led to this self-attestation form that has now come out from CISA.

Basically, it's going to put software suppliers in a position where, if you sell software to the federal government, you're going to have to sign this self-attestation form, whether you're the CEO of the firm or someone designated by the CEO to sign on their behalf, attesting to a set of practices derived from NIST's SSDF, which of course draws on a lot of industry guidance from things like OWASP SAMM, BSIMM, and other secure development frameworks and best practices. It calls out some pretty unique and specific requirements. It says that the form is essentially required for any software developed after September 14, 2022.

It's applicable to any existing software that's modified with a major version change, say from a 2.5 to a 3.0, after that same date. It's also applicable to software that suppliers deliver via continuous changes. Think of things like software as a service (SaaS); we've seen tremendous growth of organizations, including the federal government, consuming SaaS rather than self-hosted services, and we see the push for things like DevOps and DevSecOps and using cloud native infrastructure. It also excludes some things, which is interesting. Excluded, I believe, is software developed by federal agencies, as well as open source or FOSS software obtained directly by a federal agency. That's different from the open source software a supplier puts into their application or product or service, for example; if an agency is directly using open source software, the form doesn't apply to that, nor does it apply to software an agency develops itself. That said, most agencies tend to procure a lot of software or work with third parties to develop software for them, so there's a pretty broad spectrum of entities under the purview of the self-attestation form. It also does some unique things when we think about open source software. We've seen tremendous growth of organizations using open source software and third-party code, and the form points out that suppliers using open source software in their products need to attest, as part of this form, that they've taken steps to minimize the risk of using that open source software in their product. So it doesn't let them off the hook for using open source software and still has some requirements around that.

Last but not least, a couple of things I want to point out: it can be submitted via a website that they're going to release or by sending an email, which we can chat about later. I think this is an interesting way they went about it, given the push we've seen for machine-readable artifacts and things like that. And then it does allow for what they call plans of action and milestones. If we look at the federal space, if you've done any compliance work for federal or Department of Defense systems, you'll typically have security controls or requirements, and if you can't meet them, you create what's called a plan of action and milestones (POAM). This says we can't meet this control or this practice, but we will meet it by this date in the future, and here are some mitigating or compensating controls that we're putting in place in the interim. So it does allow some leeway for agencies to make use of software from suppliers that can't meet all these practices quite yet.
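
As an aside, here is a minimal sketch of the applicability logic Chris describes above, expressed as a small function. It is our simplification for illustration only, not official criteria from the draft form, and the parameter names are hypothetical.

```python
from datetime import date
from typing import Optional

CUTOFF = date(2022, 9, 14)  # cutoff date cited in the discussion above

def form_applies(developed_on: date,
                 last_major_change_on: Optional[date] = None,
                 continuously_delivered: bool = False,
                 developed_by_federal_agency: bool = False,
                 foss_obtained_directly_by_agency: bool = False) -> bool:
    """Rough sketch of the draft form's scope as described in this conversation."""
    # Exclusions: agency-developed software and FOSS an agency obtains directly.
    if developed_by_federal_agency or foss_obtained_directly_by_agency:
        return False
    # New software developed after the cutoff date.
    if developed_on > CUTOFF:
        return True
    # Existing software with a major version change after the cutoff.
    if last_major_change_on is not None and last_major_change_on > CUTOFF:
        return True
    # Software delivered via continuous changes (for example, SaaS).
    return continuously_delivered

# Example: a SaaS product first built in 2019 but delivered via continuous changes today.
print(form_applies(date(2019, 1, 1), continuously_delivered=True))  # True
```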

Dan Lorenc: Awesome. You went through a bunch of stuff there, and I jotted down some notes. The first thing that jumped out to me when I started reading this was the exclusion around minor versions and major versions. As a software engineer myself, I know that a lot of those are just kind of made up. Do you think, as a result of this, we're going to see a trend of agencies delaying upgrades if it's that hard for major versions, or of vendors just freezing the major version forever and shipping everything as minor versions?

Chris Hughes: Yeah, I think unfortunately you're pointing out the human aspect, because people are going to gamify this in a lot of different ways. If I can rename what a major or minor version update is from my perspective, everything just becomes 0.2.3.4 and you keep going and avoid falling under the purview of these requirements. It's going to take some effort from industry to be transparent and upfront about minor and major version upgrades, as well as from the government to communicate with suppliers and say, “Hey, this is a major version, and we need to treat it appropriately.” People are most definitely going to try to gamify this, I expect. Especially if an agency wants to keep using a piece of software and they know that maybe the supplier can't meet these requirements quite yet or they aren't prepared, there's going to be a lot of shenanigans, I suspect.

Dan Lorenc: One of the projects I used to work on is the Google Cloud SDK, or the gcloud tool, and originally the versioning was set up like 0.1, 0.2, and so on: every week we bumped the minor version. Then someone complained that it wasn't technically semver compatible, because stuff would break between each of those releases, so the team fixed the bug just by bumping the major version every single week. So every week there's a new release of the gcloud tool with a major version bump; it's on something like 250 now, and next it will be 251.0.0. I think we might see some of those practices shift back again as a result of this.
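
As an aside for readers less familiar with version numbering, here is a small illustrative sketch of why the “major version change” trigger hinges entirely on how a producer chooses to increment the first number of a MAJOR.MINOR.PATCH version string; nothing here is defined by the form itself.

```python
def is_major_change(old: str, new: str) -> bool:
    """Return True if the first (MAJOR) component of the version increased."""
    return int(new.split(".")[0]) > int(old.split(".")[0])

# A 2.5 -> 3.0 jump is a major change and, per the draft, would trigger attestation.
print(is_major_change("2.5.0", "3.0.0"))      # True
# A vendor shipping the same changes as 2.5 -> 2.6 avoids the trigger entirely.
print(is_major_change("2.5.0", "2.6.0"))      # False
# Conversely, weekly major bumps (the gcloud anecdote) make every release "major."
print(is_major_change("250.0.0", "251.0.0"))  # True
```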

I do like that SaaS is not excluded. SaaS doesn't have version numbers, so I'm sure they had to think through what to do there, and SaaS is in a lot of ways a bigger security risk just because you can't rely on a firewall or an air-gapped environment to protect a lot of this. I know you're pretty passionate about the increasing use of SaaS inside the federal government. Do you think this is going to result in SaaS being even harder to sell to the government than it is today? Or do you think it's going to result in an increased amount of trust placed in SaaS? How do you see that trend shifting?

Chris Hughes: I really hope the first part turns out to be inaccurate, that it won't become even harder to get SaaS into the federal space. As I said, I served on the FedRAMP team in the past; FedRAMP has been around for 10 years and there are about 300 approved service offerings in a market of tens of thousands. So while we want more compliance and security, it can also be incredibly restrictive in terms of the government getting access to innovative solutions and technologies. But I do think, like you said, that SaaS is going to come under the purview of this and SaaS providers are going to have to start attesting to these things. That said, depending on the supplier, the SaaS provider in this case, if they're a large, successful SaaS provider, they may have some success in meeting these requirements. Whereas if you're a brand-new SaaS startup, it can be more difficult, much like FedRAMP would be more difficult for a new SaaS company. And the same goes for legacy technology and suppliers: they may have a hard time meeting some of this versus a SaaS supplier who has more modern development practices, environments, and technologies in place. So I certainly hope it doesn't lead to more restrictive access to SaaS, but it could definitely happen, I think.

Dan Lorenc: Yeah, I think this is a case where it makes perfect sense. SaaS products for the government do need to be secured to a higher level than a lot of organizations are achieving today, so hopefully this helps. Another interesting callout: I love that there is an exclusion for open source software. It seems a little crazy that you even have to spell that out. Open source software is provided for free, as is, with no warranty. Almost every open source license starts out that way when you find it in a GitHub repository, but a lot of folks gloss over that aspect and are used to being able to file a bug or call a vendor if something they're using breaks. We've even seen some pretty high-profile incidents lately where a vendor is blaming security incidents on bugs in the open source software they're using. I like that this clarifies right up front that as a vendor you are responsible for the code you ship, whether it's code you wrote or software you found on the internet in a package repository, or even on a thumb drive on a sidewalk. What are your thoughts there? Do you think this is going to help the use of open source inside the government?

Chris Hughes: I'm actually in alignment with you on this one. As I was writing the book, I found it was very common for people to treat open source software maintainers as if they were suppliers. They're not, and you're using this software as is; they're under no requirement to respond to you in a certain amount of time or to address a vulnerability under an SLA. Interestingly enough, there's a parallel when you look at automobiles, for example. There used to be this idea that you could take a tire or some other part, put it in the vehicle, and say, “Well, I didn't make this, I'm using it from so and so.” It doesn't work like that. You made the product, you decided to integrate this thing into your product, therefore you are now responsible for it. And like you said, you're taking this as is and you're responsible for the components you put in your product, so this is going to require organizations supplying software to the government to apply more rigor and governance around their open source software consumption and what they put in their products, and to be a bit more prudent about it. I think it's a good thing overall. I definitely hope it doesn't hurt the open source software ecosystem; I don't think it will. Software manufacturers and suppliers will start to pause and take a look at what they're integrating into their product, knowing that they are responsible for it.

Dan Lorenc: I think that's a good thing overall. You can contrast this with a sort of parallel regulatory movement going on in Europe right now, the Cyber Resilience Act, which has gotten a ton of discussion. The Cyber Resilience Act fails to make that distinction between open source software that's consumed without a vendor relationship and software that's consumed with contracts, procurement, money changing hands, and an actual vendor relationship. Depending on your interpretation, a lot of folks, including a lot of really important open source communities and foundations, are really worried that the CRA in Europe is going to place ridiculous amounts of undue burden on open source maintainers, which might even make their software's use and distribution inside European markets untenable. Open source maintainers could be held liable for flaws in open source if it's used in Europe, despite there not being any contracts set up or them receiving any compensation. It kind of flips the whole open source movement around in a way that is worrying a lot of folks in the community. Have you paid any attention to that one?

Chris Hughes: I have. I touched on that in the book too, and I think the wording and the way that played out is really unfortunate, because as you point out, they're not being compensated for this. They're often doing it of their own free will and in their free time, and we have this vibrant, thriving ecosystem of open source software maintainers and contributors and people who participate in it. Now, if they're going to feel threatened or come under some kind of requirement that puts them in legal peril, they may be more reluctant to participate, and it's really concerning that it was worded the way it is. I think that happens sometimes when policy folks who don't have a background in technology make policy. For example, we previously saw the NDAA use the phrase “vulnerability-free software,” and this is like that.

Dan Lorenc: I was just thinking the same thing.

Chris Hughes: I think it's really critical that we have technologists involved in the policymaking process so they can point out things that you just wouldn't think of from a legislative or policy perspective, where you aren't as familiar with the technology as a hands-on technologist would be. So I hope they rectify that, but it's definitely concerning for sure.

Dan Lorenc: Yeah, makes a lot of sense. Then there's that final exclusion you hinted at a little bit: software developed by federal agencies is exempt here. Can you shed any light on why you think that might be? Is it just because it doesn't go through the typical procurement process, so it's kind of silly to apply it in the same way, or is there some other reason it was specifically called out?

Chris Hughes: You know, I can't say for sure one way or the other, but it is concerning in the sense that we've had no shortage of federal agencies with notable data breaches that have impacted millions of people in some cases. I think they need to be held to the same level of rigor and requirement as industry. Also, it's a little bit contradictory and it sends the wrong message, like “you must do this, but we don't have to.” I'm not a big fan of that. Federal systems do go through the risk management framework and an authority-to-operate process, but typically that doesn't have the same level of rigor around secure software development practices and open source software consumption, so I think federal systems and technology need to come along for the ride and be under the same requirements as industry to some extent.

Dan Lorenc: Yeah, it could definitely be interpreted that way, but I guess you're saying the federal government hasn't found the magic secret to writing software without vulnerabilities.

Chris Hughes: Unfortunately not. Not yet.

Dan Lorenc: Awesome. All right, let's move on to talking about the form itself. We did a pretty good job explaining how we got here. How do you think this form will help make security-forward software accessible to the government? And obviously, let's start with the elephant in the room, which is the title itself. This is the self-attestation form, which implies people get to attest to it themselves, not a third-party auditor. I know you've drawn some historical conclusions; you pointed to some other recent attempts at something that started this way and how it tends to work out in the government. What do you think the implication of starting with self-attestation is?

Chris Hughes: Yeah, this is a really interesting topic, and it's a challenging problem no matter which way you look at it. In the article you're talking about, I point out parallels between NIST 800-171 and the Cybersecurity Maturity Model Certification (CMMC). In the defense industrial base, that came about because organizations doing work with the federal government had some pretty significant data breaches that exposed research and development for weapon systems and things like that, and they had basically been self-attesting that they were doing these security controls and practices. But as it turned out, incidents proved that didn't seem to be the case, and third-party analysis also pointed out that it wasn't the case.

A little plug here for Jim Dempsey: he has an excellent series on the Lawfare blog about third-party attestation versus first-party or self-attestation, and each has its unique benefits and drawbacks. A self-attestation obviously is much faster, easier, and more scalable, right? But on the flip side, there's no real validation by a third party that the things being claimed are actually done. The challenge is that if you introduce a third party, like FedRAMP does with its third-party assessment organizations, you end up with only 300 approved offerings in a market of tens of thousands. Imagine trying to have a third party go out and evaluate every software manufacturer selling software to the federal government and examine their software development practices and their artifacts. It's a very tricky line to walk, self-attestation versus third-party attestation, and there are drawbacks and concerns with self-attestation. But I think this is the right way to approach it initially; otherwise it would be really cumbersome, bureaucratic, and heavy on the industry to try to meet this with the introduction of yet another third-party attestation framework.

Dan Lorenc: A lot of the discussion I've seen seems to say we're starting with self-attestation just to get something done while we train up that cottage industry of SSDF auditors and get them aware of what they're supposed to be looking for. That way we don't have to stop the world while we wait for it, but it is a completely different model. A lot of the discussion I heard early on said, “This is just going to be a checkbox; everyone's going to fill it out so they don't lose their contracts.” But after reading the draft, it starts out with a pretty serious tone, right? This is expected to be signed by the CEO of a company selling software to the government, and right under that they point out the exact law you'd be violating if you lie on the form. And this is not a civil penalty, right? This isn't you lose your contracts, this isn't you get fined if you screw up. Right there it points to up to five years in prison if you lie on this form. I know Dr. Allan Friedman and a lot of other folks, when we talk about SBOMs and accuracy, always love to point out that it is already a crime to lie to the federal government. Everyone should know that already, but it is pretty serious to start out this form by calling that out and reminding folks. As the CEO of a company, reading that, I get worried, right? In a lot of these cases I'd rather have a third party come in and take on some of that liability and do those checks, just so I'm not personally held accountable if a mistake is made. Do you think it's really going to be glossed over and just checked off to keep things moving, or do you think folks are going to take this seriously?

Chris Hughes: I think, given the language that's there, it will be taken pretty seriously. At least I hope it will be, given the consequences, or potential consequences, of failing to do so. I think it's going to take some time to mature and to give assessors time to learn the SSDF and how to assess organizations' compliance with it, things like that. But I think people are going to take it seriously, especially when the CEO is signing it or someone else is signing on behalf of the CEO. That said, as you noted, there's a lot of ambiguity in there: if an incident happens, you'd have to go in and specifically prove that a certain practice wasn't done. I'm not a legal scholar, but I imagine this can get pretty muddy if you had to enforce something like that after a complicated incident in a large, complex environment, for example. But given the language there, I think people will take it seriously. At least I hope so; if I were a CEO, I certainly would be taking it pretty seriously.

Dan Lorenc: There's not a lot to joke about in there, and there are a lot of controls that I think are going to surprise folks once they actually zoom into those requirements. I have some guesses about what's going to be most challenging, just from the organizations I've talked to who have to comply with the form, but let's start with you. Is there anything in there that you think is really going to be tough for companies as they start reading this and trying to get in shape to attest?

Chris Hughes: There are some great ones in there. I'm curious to hear which ones jump out to you, but for me there were a few. One is the practice regarding the provenance of the components used. Most organizations don't even know what components they're using, let alone where they came from. So I think that one in particular is going to be really challenging for folks to comply with. How about you?

Dan Lorenc: Yeah, that one definitely jumped out to me for those same reasons. SBOMs have been a huge topic of conversation for a few years now, and most organizations I've talked to are struggling to even begin to find the information required to fill out an SBOM. I like to think of an SBOM as the list of ingredients, the components that are actually going into your software or solution. Provenance takes it one step further back, making sure that you got those components from where they were supposed to come from and that integrity was checked along the way to show things were built responsibly. So that provenance aspect is about looking even farther upstream, and it's mentioned a couple of times in the form's requirements. I think that one is definitely going to be pretty challenging. The other aspect, some of which already kind of applies to FedRAMP, is the vulnerability management requirements. Those are always challenging and always lead to stacks and stacks of POAMs for everyone I've talked to. Do you think that's going to be a challenge here? Do you think this is getting broader, or is this just a different take on the same aspect?

Chris Hughes: I think that's another good one to call out. As I was digging into the software supply chain topic and doing a lot of writing and reading on it, I found it really is a challenging problem. Historically, as we've done it as an industry, there's a big difference between a vulnerability and an exploitable vulnerability. And as you point out, suppliers have to have processes that show reasonable steps were taken to address the security of third-party components. What exactly does “reasonable steps” mean? That's where you get into the subjective or debatable aspects of this when it comes to vulnerability management. There are some pretty strong differences of opinion on what “reasonable” might be, depending on whether you're the supplier or the consumer. I think that's another great one to point out, too.
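
As an aside on the provenance point above, here is a minimal sketch of how a component (an “ingredient”) and its provenance might be recorded together. The field names are simplified for illustration and only loosely echo real SBOM formats such as CycloneDX or SPDX.

```python
import json

# A minimal, illustrative SBOM-style record: the components ("ingredients")
# plus provenance hints about where each component came from and how its
# integrity was checked. Field names are simplified, not an exact schema.
sbom = {
    "product": "example-service",
    "version": "1.4.2",
    "components": [
        {
            "name": "log4j-core",
            "version": "2.20.0",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.20.0",
            "provenance": {
                "downloaded_from": "https://repo1.maven.org/maven2/",  # upstream source
                "sha256": "digest recorded and verified at build time",
                "built_by": "ci.example.internal/pipeline/1234",  # hypothetical build system
            },
        }
    ],
}

print(json.dumps(sbom, indent=2))
```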

Dan Lorenc: A lot of this is about removing ambiguity, and that's one of the main goals of NIST. When they publish documents like this, it's an attempt to codify, in a less ambiguous way, what industry-standard best practices are, so that we all have something to refer to. As folks start to think about filling out this form and complying with the SSDF, what would you advise organizations as they start down this journey?

Chris Hughes: First, I would start by understanding which of the software I produce these requirements apply to: the date it was made, the type of development practices, whether it's SaaS, whether it uses CI/CD, and when the latest major change was, all the things we talked about earlier. In other words, understanding which software under my purview this form and its requirements and practices are applicable to. From there, I would definitely get familiar with the SSDF. There's been a lot of talk about this form only covering a subset of SSDF practices, but I would recommend getting familiar with the broader SSDF to understand it in depth and then narrowing in on the specific practices the form calls out. The next thing I would do is look at my software development practices, my environments, my configurations, and my methodologies and ask: how do I measure up to these requirements, where do I have gaps, and how do I start to address them? And then, to use the term POAM, maybe internally I start to do a bit of a gap analysis and say, “Hey, we are doing these things well, we're not doing these practices, and we have some gaps or deficiencies we need to address.” Then I'd work with my security and development teams to come up with a plan for how we're going to address those deficiencies and by when, because you're going to need to provide that information to the government regardless. Not only do you need it internally to meet the requirements, you also need to provide it to the government if you do have gaps and deficiencies.

Moving on from there, in some cases you're likely going to need to engage a third party if you don't have robust secure software development expertise internally, someone who can come in, help you understand your practices as they are versus where they need to be, and make recommendations. Finally, I would wrap it up by discussing this with my federal counterparts: “Hey, how are you all implementing these requirements? Are you prepared to receive these forms?” Have that conversation with them, because a lot of agencies, just like suppliers, are going through this process and getting familiar with the requirements.

Dan Lorenc: Let's wrap up on the self-attestation form itself a little bit more. The request for comments is open right now, right? So this isn't final yet. Do you have any specific recommendations on how to improve the form itself or the overall process before it does get locked in?

Chris Hughes: Like you, I think the practices there are good and definitely solid recommendations. I would recommend that we find a more efficient way to handle these forms. We're seeing a lot of traction, like we said, for machine-readable artifacts, SBOMs and VEX, and everything is moving to infrastructure as code and policy as code, and then we have this PDF document. It's going to be really hard for organizations to ingest these at scale. You can maybe collect them and throw them in a folder, but if you want to make use of them to understand your software supply chain and its risk, then these forms need to be machine readable so you can pull the information out of them and present it in a better, more scalable way. I really hope we see this mature; if not immediately upon the release of the final version, then soon after, hopefully some kind of machine-readable format for this will be available. I really think it's going to be hard for agencies that have hundreds or thousands of suppliers to do this via PDF.
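
To illustrate Chris's point, here is a rough sketch of what a machine-readable rendering of an attestation might look like. The schema and field names below are entirely hypothetical; the draft form does not define a machine-readable format.

```python
import json

# Hypothetical machine-readable attestation record (illustrative field names only).
attestation = {
    "software": {"name": "example-product", "version": "3.1.0"},
    "producer": "Example Corp",
    "signer": {"name": "Jane Doe", "title": "CEO"},
    "attested_practices": {
        "secure_build_environment": True,
        "component_provenance_maintained": True,
        "vulnerability_disclosure_program": True,
    },
    "poams": [
        {
            "practice": "automated_dependency_checks",
            "planned_completion": "2024-03-01",
            "interim_mitigation": "manual review of third-party components",
        }
    ],
    "signed_on": "2023-06-15",
}

# An agency could ingest thousands of these records without scraping data out of PDFs.
print(json.dumps(attestation, indent=2))
```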

Dan Lorenc: Yeah, especially when it comes to the documentation around the controls and the practices, because without having this in some kind of standard format that explains what everyone is doing, it's going to be hard to get beyond just that Boolean “did the vendor sign it or not” aspect. All these PDFs are just going to end up in an inbox somewhere that's hard to extract any data out of. Following on from that, if everybody does just fill out these PDFs and attach tons of screenshots of, you know, the GitHub settings being configured correctly and different practices being in place, it's going to be hard to validate that any of this is accurate, if that's even a goal at this point.

Chris Hughes: I mean, that's the implication of self-attestation: they're essentially just taking the word of the supplier in this case. No one is going behind them to validate these things. They may look to see if you provided the artifact in the right format or some accompanying proof or supporting documentation, but no one is actually going out to validate these things in the suppliers' environments, so there is essentially implicit trust, which is ironic. We hear a lot about zero trust, and right now we're placing a lot of trust in our suppliers to do the right things and attest to doing so.

Dan Lorenc: Hopefully that's something we can move beyond as we get this rolled out across the industry.

All right, let's shift gears a little bit and talk about the overall practice we're seeing of public and private collaboration. You mentioned this at the beginning, but you recently joined CISA as a Cyber Fellow. What are some other ways that government agencies and industry can collaborate to make sure the practices being required and recommended here by the government are actually practical and provide a meaningful impact for everyone?

Chris Hughes: I think one of the best ways is just getting government representation involved in some of these communities, things like the CNCF or OpenSSF and other organizations that have these robust communities of people who are passionate about software development and delivery. Getting government representatives involved in these communities means hearing firsthand about the challenges of complying with the requirements, or about better, more scalable, and more automatable ways of doing things. And then, obviously, we could do a whole session on the government's challenges with workforce attraction and retention, but a big piece of it is getting the right people into government so that we see policies that are coherent, make sense, and are drawn from practical experience when it comes to software development and cybersecurity. So I think just getting out there, getting involved in the community, and being part of the community can go a long way for the government.

Dan Lorenc: Awesome. For folks that work in the community on that side, is there anything we can do to make it easier for folks in the government to participate?

Chris Hughes: I would say just having things be virtual helps sometimes. It's great to get together in person, but believe it or not, the government doesn't always have the budget or flexibility to travel to events that you might have as a civilian working in this industry. So having events be virtual, being open and collaborative, and being out there can go a long way. And then don't be afraid to reach out and engage with your government counterparts. They may not know who you are or what you're up to, so just get out there and communicate with them: “Hey, we have this community of interest or a special interest group and we're working on this problem. We would love to have a representative from agency XYZ.” Just crossing the aisle, so to speak, from industry to government could help a lot.

Dan Lorenc: Oh man! So, no more Open Source Summits in Maui or Nice or anything like that.

All right, let's jump over to your book now a little bit. You've talked about the general theme of transparency a ton. We're seeing transparency used to help cybersecurity both in the SBOM initiative, just around knowing the ingredients, and also here in publishing your actual practices. How would you frame the overall role of transparency in self-attestation, and then further along as we get to third-party attestations?

Chris Hughes: Historically, there's been an information asymmetry between software suppliers and software consumers, whether we're talking about enterprise environments or citizens. We're seeing it here, as we've discussed: requirements emerging for the government to understand its software suppliers and have more transparency there. But there are also other efforts underway, like the Cyber Resilience Act you talked about.

We're also seeing efforts on cybersecurity labeling, just to help consumers make more informed decisions about secure products. We've had this weird situation where, just like with open source, as a society we've made more and more use of software, so it's in everything we do. Every aspect of society is now dependent to some extent on software, yet we really haven't addressed the transparency aspect of it. What's inside of it? Where did it come from? Who created it? What methods and practices were used to create it? I think that's why we're seeing transparency make such a big push. We think about things like zero trust and how imperative it is for securing the modern enterprise; well, it's the same thing for society. We need to have trust, and that trust requires transparency.

Dan Lorenc: All right, well, thanks again for joining us today. I think we covered the entire SSDF and some of the history, and I loved hearing your thoughts from your unique seat and point of view as somebody who's played on both sides, helping organizations meet government requirements and now getting to help out in this advisory program you're a part of. Is there anything else you want to leave the audience with?

Chris Hughes: No, just happy to have the opportunity to come here and chat. I appreciate the great questions from the audience. Find me on LinkedIn; I'm happy to chat about these things, and I'm always learning alongside everyone else. So I'm just happy to be a part of the industry here.

If you are a CSP, solutions provider, or federal contractor looking for help implementing the SSDF, understanding the self-attestation form, or working through FedRAMP authorization or renewal, get in touch with our team today.
