Jay Lyman of The 451 Group posted this note on the security of open source software.
Content that I contributed to the post . . .
1. There is a direct correlation between reported vulnerabilities and usage. The most widely used applications have the most consistent and accurate issue reporting, as the data reported to the NVD shows. This is a two-part situation. Applications in more “common” use receive more routine attention, and some applications are on the standing review lists of internal and external testers; the more visible an application is, the more likely it is to have issues reported regularly. A further distinction is that popular applications more often have commercial funding and pay people to test for and find issues. What that distinction clarifies is that a large user base and popularity lead to something important: corroborating reports of the same issue. If an internal engineer reports an issue, that is valuable, even on a FOSS project. Popularity means that reports also come from external sources, and those reports can be correlated against the internal ones, providing an objective review of what is going on.
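The corroboration described above can be sketched as a simple comparison of issue identifiers from the two sources. This is an illustrative sketch only: the identifiers and the split between "internal" and "external" reports are hypothetical, and real data would come from a tracker or the NVD.

```python
# Hypothetical issue identifiers; in practice these might be CVE IDs
# pulled from an internal tracker and from public advisories.
internal_reports = {"CVE-2009-0001", "CVE-2009-0002", "CVE-2009-0003"}
external_reports = {"CVE-2009-0002", "CVE-2009-0003", "CVE-2009-0004"}

# Issues seen by both sides corroborate each other.
corroborated = internal_reports & external_reports
# Issues only one side has seen still need independent confirmation.
internal_only = internal_reports - external_reports
external_only = external_reports - internal_reports

print("corroborated:", sorted(corroborated))
print("internal only:", sorted(internal_only))
print("external only:", sorted(external_only))
```

The larger the external user base, the more of the internal list ends up in the corroborated set, which is the objective review the point describes.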
2. Patches are released more consistently by well-financed operations. Financial support leads to a more consistent patch release schedule; good engineers get paid to do what they do. Engineers working on Linux have not had to do so for free, yet Linux is held up as an example of how to do it right. In practice, Linux has among the most vulnerabilities reported against any OS, and when combined with issues against distributions, Linux has the most issues reported against it. The important thing is that when issues are reported, a well-funded FOSS project can put engineers on the suspected defect to test, document, and resolve it. If a patch is required, a well-funded operation delivers it faster and more consistently.
3. Vulnerabilities are sometimes an inverse measure of security. Risk has an indirect correlation to issues reported, because the issues are a communications mechanism. If we point to reported vulnerabilities as a problem, companies will become secretive and mislead in their reporting. The process we want is lots of issues reported, and lots of very timely responses. The “risk” is only the window of time from when an issue is reported to when a maintainer or vendor responds or posts a patch; that is the time during which there is no clear path to safety. Therefore, the biggest risk in software security is using an application which has NO reported issues. This means that nobody is looking, or not looking hard enough, or you are using an application with so small a user population that nobody will see anything or report it. This risk increases as the complexity of the application increases. I would not be surprised if an icon editor had no issues reported for two years, but I would highly suspect the information if a major database had none for the same period. Remember, reported issues are just information. It is the absence of a response that represents a risk.
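The risk window described above is just the elapsed time between a report and its fix. A minimal sketch, using made-up advisory dates purely for illustration (real records would come from a source such as the NVD):

```python
from datetime import date

# Hypothetical advisory records: (issue id, reported date, patch date).
# The ids and dates are invented for illustration.
advisories = [
    ("ISSUE-001", date(2009, 1, 5), date(2009, 1, 12)),
    ("ISSUE-002", date(2009, 2, 20), date(2009, 2, 22)),
    ("ISSUE-003", date(2009, 3, 1), date(2009, 4, 15)),
]

def exposure_days(reported, patched):
    """The risk window: days between a report and its fix."""
    return (patched - reported).days

windows = [exposure_days(reported, patched) for _, reported, patched in advisories]
print("per-issue exposure (days):", windows)   # → [7, 2, 45]
print("mean exposure (days):", sum(windows) / len(windows))   # → 18.0
```

On this view, a long list of issues with short windows is a healthier signal than an empty list, which measures nothing at all.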
This work is licensed under Creative Commons BY-SA 3.0