It’s 2015 and it’s pretty clear the Open Source way has largely won as a development model for large and small projects. But when it comes to security we still practice a less-than-open model of embargoes with minimal or, in some cases, no community involvement. With the transition to more open development tools, such as Gitorious and GitHub, it is now time for the security process to change and become more open.
The problem
In general the argument for embargoes boils down to: “we’ll fix these issues in private and release the update in a coordinated fashion, minimizing the window where attackers know about the issue but no update is available”. So why not embargo all security issues, fix them privately and coordinate vendor updates? For one thing, we can never be certain that an embargoed issue is known only to the people who found and reported it and the people working on the fix. If one person is able to find a flaw, a second person can find it independently. This is exactly what happened with the high-profile OpenSSL “Heartbleed” issue, which was independently discovered by an engineer at Codenomicon and by the Google security team.
There is also a deeper problem: a mismatch between how Open Source software is built and how its security flaws are handled. Open Source development, testing, QA and distribution now happen mostly in public. Most Open Source projects have public source code trees that anyone can view, and in many cases submit change requests to. Many projects have also grown in size, not just in code but in contributors; some now involve hundreds or even thousands of developers (OpenStack, the Linux kernel, etc.). The Open Source way has clearly scaled to projects of this size; the old-fashioned way of handling security flaws in secret has not.
Process and tools
The Open Source development method generally looks something like this: the project has a public source code repository that anyone can copy from and that specific people can commit to. The project may run a continuous integration platform such as Jenkins, followed by QA testing to make sure everything still works. Fewer and fewer projects keep a private or hidden repository, partly because there is little benefit to doing so and partly because many providers do not allow private repositories on their free plans. The same applies to the continuous integration and testing environments many projects rely on (especially the free services). So handling a security issue in secret means that many projects cannot use their existing infrastructure and processes at all; instead they must email patches around privately and do builds and testing on their own workstations, as sketched below.
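As a rough illustration, that private workflow usually falls back to plain git tooling. Everything here (the branch name, file names, make targets) is hypothetical; this is a minimal sketch of the fallback, not any particular project’s actual process:

    # On the developer's workstation: keep the fix on a local-only branch
    git checkout -b embargoed-fix          # never pushed to the public remote
    # ...edit the code, commit the fix locally...
    git format-patch origin/master --stdout > fix.patch
    # Mail fix.patch privately to the embargo participants (git send-email
    # to a closed alias is common); each recipient applies and tests by hand:
    git am < fix.patch
    make && make check                     # local build and tests, no public CI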
Code expertise
But let’s assume an embargoed issue has been discovered and reported upstream, and that only the original reporter and upstream know about it. Now we have to hope that between the reporter and upstream there is enough time and expertise to properly research the issue and fully understand it. The researcher may have found the issue through fuzzing and may only have a test case that causes a crash, with no idea which section of code is affected or how. Alternatively, the researcher may know what code is affected but not fully understand it or how it relates to the rest of the program, in which case the issue may be more severe than they think, or less severe, or not exploitable at all. The upstream project may likewise lack the time or resources to understand the code properly: many of these people are volunteers, projects have turnover, and the person who originally wrote the code may be long gone. Making the issue public means that additional people, such as vendor security teams, can participate in analyzing the issue and help to fully understand it.
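To make the fuzzing scenario concrete, here is a hedged sketch using AFL as the example fuzzer (the target name “parser” and the paths are hypothetical). The point is that the fuzzer delivers a crashing input, not an analysis:

    # Build the target with AFL instrumentation and start fuzzing
    afl-gcc -o parser parser.c
    afl-fuzz -i testcases/ -o findings/ ./parser @@
    # Eventually findings/crashes/ holds inputs named like id:000000,sig:11,...
    # Reproducing proves the crash but says nothing about severity:
    ./parser findings/crashes/id:000000*             # segfaults; root cause unknown
    gdb --args ./parser findings/crashes/id:000000*  # a backtrace, but exploitable?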
With an embargoed issue, patch creation involves only the researcher and upstream. The result is often a patch that is incomplete and does not fully address the issue. This happened with the Bash Shellshock issue (CVE-2014-6271), where the initial patch, and even subsequent patches, were incomplete, resulting in several more CVEs (CVE-2014-6277, CVE-2014-6278, CVE-2014-7169). For a reasonably complete list of such examples, simply search the CVE database for “because of an incomplete fix for”.
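Shellshock also shows how easily anyone could test whether a fix actually worked. These are the checks that circulated publicly at the time; the second demonstrates CVE-2014-7169 surviving the first patch:

    # CVE-2014-6271: a vulnerable bash prints "vulnerable" before the echo
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    # CVE-2014-7169: on a bash patched only for 6271, this still writes the
    # output of `date` into a file named "echo" in the current directory
    env X='() { (a)=>\' bash -c "echo date"; cat echo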
So, assuming we now have a fully understood issue and a correct patch, we still need to apply the patch and run the software through QA before release. If the issue is embargoed, this has to happen in secret. However, many Open Source projects use public or open infrastructure, or services that do not support selective privacy (a project is either entirely public or entirely private). For these projects the patching and QA cycle must happen outside their normal infrastructure, and for some that may not be practical at all: replicating a large Jenkins installation privately can be very difficult.
And finally, even once we have a fully patched and tested source code release, we may still need to coordinate that release with other vendors, which carries significant overhead and time constraints for everyone involved. If the issue is public, the entire effort spent on private coordination is unnecessary and can instead go into making sure the patches are correct and actually address the flaw.
The fix
The answer to the embargo question is surprisingly simple: we embargo only the issues that really matter, and even then we use embargoes sparingly. Making an issue public gets more eyes on it: additional security experts, who would never have been aware of it under an embargo, can join the original researcher and the upstream project, increasing the chances of the issue being properly understood and patched the first time around. And for the majority of lower-severity issues (e.g. most XSS and temporary file vulnerabilities) attackers have little to no interest anyway, so the cost of an embargo makes no sense. In short: why not treat most security bugs like normal bugs and get them fixed quickly and properly the first time around?