By Gordon Haff, Technology Evangelist, Red Hat
One aspect of today’s software development process that’s contributed greatly to the overall pace of innovation and change is open source. And one of the key reasons that open source has proven such an effective approach to writing software is that it makes it possible to reuse, remix, and build upon code from a wide range of different sources.
GitHub claims 35 million repositories. Docker Hub claims “100,000+ free apps, public and private registries.” A wide range of virtual machine images, many of them based wholly or in part on both commercial and community-supported open source software stacks, are likewise available on cloud providers such as Amazon Web Services. Such freewheeling software bazaars have greatly reduced the friction associated with developing new applications and services. The rate at which we see software released across a wide range of platforms today would simply not be possible if rebuilding most things from scratch were the typical approach.
Furthermore, the open source development process means that, when vulnerabilities are found, the entire community of developers and vendors can work together to update code, security advisories, and documentation in a coordinated manner.
Yet, a degree of caveat emptor is also called for when sourcing software from such varied and loosely organized environments. The issue is typically not so much that software which is malicious by design gets uploaded to these public repositories (although that can happen). Rather, it’s that much of this software, perhaps even the majority, isn’t systematically updated to patch vulnerabilities, was never developed using solid security methodologies in the first place, or isn’t adequately supported.
Therefore, as with any process building upstream components into a product, it’s important to put in place procedures to validate the source of the software, perform appropriate due diligence on the code before it’s put into production, and apply processes that can detect and mitigate future problems. In other words, you must secure the software supply chain.
One of the most important aspects of a secure supply chain is understanding and validating the source of the software. For example, an IT organization may want to use a whitelist for critical infrastructure software so that it’s only sourced from specific repositories and vendors and is delivered in a cryptographically secure manner. When assembling unsupported components, it becomes that much more important for IT organizations to perform their own evaluation of upstream code provenance and licensing–as well as whether it’s been updated against known vulnerabilities.
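The kind of sourcing policy described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the repository allowlist, artifact names, and published digests are all hypothetical, and a production system would verify cryptographic signatures (e.g., GPG) rather than only checksums.

```python
import hashlib

# Hypothetical allowlist of trusted repositories and the published
# SHA-256 digests for approved artifacts (illustrative values only).
TRUSTED_REPOS = {"https://repo.example.com/stable"}
EXPECTED_DIGESTS = {
    "app-1.2.tar.gz": hashlib.sha256(b"release contents").hexdigest(),
}

def validate_artifact(repo_url: str, name: str, payload: bytes) -> bool:
    """Accept an artifact only if it was sourced from an allowlisted
    repository AND its digest matches the published value."""
    if repo_url not in TRUSTED_REPOS:
        return False  # unknown or unapproved source
    expected = EXPECTED_DIGESTS.get(name)
    if expected is None:
        return False  # artifact not on the approved list
    return hashlib.sha256(payload).hexdigest() == expected
```

Either check failing on its own is enough to reject the download, which is the point: provenance and integrity are evaluated together, not separately.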
The importance of rigorous review will, of course, vary based on how a particular piece of software will be used. Mission-critical production systems have different standards from developer sandboxes. However, systems are increasingly exposed to public networks, and software development, test, and deployment are becoming more integrated as parts of a single continuous integration/continuous delivery pipeline. As a result, it can be difficult to reliably segregate software into “the stuff that’s safely behind a firewall” and “the stuff that needs to be rigorously secured from improper external access.”
Software in a trusted repository should be digitally signed and distributed through secure channels. Vulnerability and errata information should generally also be provided in machine-readable form so that it can be consumed and acted upon at scale–such as through the use of a Security Content Automation Protocol (SCAP) scanner. These types of assurances, which have been in place at commercial open source software vendors for some time, are now also being extended to new technologies such as containers. For example, container certification can let you know that components come from a trusted source, platform packages have not been tampered with, the container image is free of known vulnerabilities in the platform components or layers, and the complete stack is commercially supported.
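The value of machine-readable errata is that exposure checks can be automated across an entire fleet. In practice this is what an SCAP scanner such as OpenSCAP does against vendor OVAL content; the toy sketch below only illustrates the idea, with a made-up advisory feed and invented version numbers.

```python
# Hypothetical machine-readable advisory feed. Real vendor errata would be
# distributed as OVAL/SCAP content and evaluated by a scanner; the CVE IDs
# are real identifiers, but the version data here is illustrative.
advisories = [
    {"id": "CVE-2014-6271", "package": "bash",    "fixed_in": (4, 3, 25)},
    {"id": "CVE-2014-0160", "package": "openssl", "fixed_in": (1, 0, 1)},
]

# Hypothetical inventory of installed package versions on one host.
installed = {"bash": (4, 3, 11), "openssl": (1, 0, 2)}

def affected(installed: dict, advisories: list) -> list:
    """Return IDs of advisories whose fixed version is newer than
    what is currently installed (i.e., the host is still exposed)."""
    return [
        a["id"]
        for a in advisories
        if a["package"] in installed and installed[a["package"]] < a["fixed_in"]
    ]
```

Because the feed is structured data rather than prose, the same check can run unattended against thousands of hosts, which is what “consumed and acted upon at scale” amounts to.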
Securing the software supply chain doesn’t end with downloading the code, however. Operating secure environments and processes also means–either directly or through your vendor–tracking and having ongoing expertise in upstream projects, implementing reproducible build and testing systems, and responding quickly and effectively to vulnerabilities. As with a fire or a car accident, minutes count with security incident response. Roles, responsibilities, and processes must be well established ahead of time. Technical expertise matters, but so does having clear communication plans to share information with those potentially affected by the incident and with broader constituencies such as the press.
During the Shellshock and Heartbleed security incidents, for example, users with effective plans and support relationships in place had the knowledge, patches, and applications needed to verify exposure to these bugs available to them within hours, which allowed them to quickly and effectively mitigate any potential issues and avoid business disruption.
Finally, it’s important to have insight into and control over software throughout its lifecycle. For example, real-time monitoring and enforcement of policies can not only address performance and reliability issues before they become serious but also detect and mitigate potential compliance issues. The need for security policies and plans doesn’t end even when an application is retired. The ownership of, and policies pertaining to, the data associated with an application need to be well understood so that the proper steps can be taken to comply with retention requirements and to sanitize personally identifiable information (PII).
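The retirement step described above can be made concrete with a small sketch. Everything here is an assumption for illustration: which fields count as PII, the redaction marker, and the record shape would all be dictated by the organization’s actual retention and privacy policies.

```python
# Hypothetical set of fields an organization's policy classifies as PII.
PII_FIELDS = {"name", "email", "ssn"}

def sanitize(record: dict) -> dict:
    """Redact PII fields from a record at application retirement,
    preserving non-sensitive fields needed for retention compliance."""
    return {
        key: ("[REDACTED]" if key in PII_FIELDS else value)
        for key, value in record.items()
    }
```

The point is that sanitization is a deliberate, policy-driven step applied when the application is decommissioned, not an afterthought once the data has already been archived.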
The sourcing options in today’s world are immensely varied and rich–and nowhere is this truer than with open source software. But effectively and safely using that variety doesn’t come entirely for free. More than ever, it requires paying attention to and establishing control over your supply chain.