2008: The Risk Report
The Research Group has developed tools that objectively track and report on operational risk associated with software applications, operating systems, and hardware. I have seen a number of "Most Risky" lists that seem subjective, crafted from nothing more than a few Google searches and a popularity contest.
Top 25 FOSS
Why 25? It is easier to show that software risk is time sensitive, objective, and accurate with a larger list. My current list as of this week tracks 14813 products from almost as many vendors. The risk metrics are collected automatically and sorted. Members of the team correct discrepancies introduced by bad data, and then the results are generated using statistical queries on MySQL. http://nvd.nist.gov is the official data source for the risk information. The ordered output is generated by an algorithm that scores a weighted value for each CVE based on the risk and age of that CVE, and then totals all the weighted CVEs across the life of a product. These total scores are then compared to one another. In this way, an application that has been out for a very short time could top the list if it had more high-criticality security issues over its release life than most applications. See the full report at http://gpl3.blogspot.com/2009/01/2008-risk-report.html
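The post does not publish the exact weighting formula, only that each CVE is weighted by its risk and its age and the weighted values are totaled over the product's life. As a minimal sketch, assuming CVSS as the per-CVE risk value and an exponential age decay (the half-life parameter is my assumption, not the report's):

```python
from datetime import date

def weighted_score(cvss: float, published: date, today: date,
                   half_life_days: float = 365.0) -> float:
    """Weight a CVE's CVSS score so newer issues count more than older ones.

    The exponential decay is a hypothetical stand-in for the report's
    unpublished age-weighting function.
    """
    age_days = (today - published).days
    decay = 0.5 ** (age_days / half_life_days)  # assumed one-year half-life
    return cvss * decay

def product_score(cves, today: date) -> float:
    """Total the weighted scores of all CVEs reported over a product's life.

    `cves` is an iterable of (cvss, published_date) pairs.
    """
    return sum(weighted_score(cvss, pub, today) for cvss, pub in cves)
```

Under this sketch, a young product with many recent high-severity CVEs can out-score an old product whose issues have decayed, which matches the behavior the report describes.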
- Linux Kernel
- Mozilla Firefox
- Mozilla Seamonkey
- Mozilla Thunderbird
- Gentoo Linux
- Debian Linux
- Ubuntu Linux
- Mozilla Suite
- Trustix Secure Linux
- web-app.org WebAPP
What does this mean?
The lists review vulnerabilities reported historically to the National Vulnerability Database and sort them. The reported vulnerabilities are weighted by their individual risk, then weighted by their age, with newer issues counting more than older issues, all else being equal. The "percentage" is a relative metric: the "most vulnerable" application in a report is scored 100%, and all other software is scored relative to that 100%.
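The relative percentage described above can be sketched as a simple normalization against the top score (product names here are illustrative, not the report's data):

```python
def relative_percentages(scores: dict) -> dict:
    """Scale each product's total risk score so the highest scores 100%."""
    top = max(scores.values())
    return {name: round(100.0 * score / top, 1)
            for name, score in scores.items()}

# Illustrative totals only; not figures from the actual report.
print(relative_percentages({"Product A": 80.0, "Product B": 40.0}))
```

Because the metric is relative, a product's percentage can shift between reports even if its own vulnerability history is unchanged, simply because the top-scoring product changed.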
Is this software bad?
No. What you see is that open source and proprietary software both have issues. The risk seems to correlate directly with the complexity of the software type. Operating systems are inherently complex and are always very high in reported vulnerabilities. Notice that regardless of license type, the level of relative risk is comparable by software type. This seems to indicate that complex software takes diligent effort to write, debug, and manage in an operational environment, regardless of the license the software is distributed under. My team has tracked resolution intervals relative to reported issues. What we saw as we began monitoring the publicly available data is that a well-used, accessible forum drives awareness of issues and indirectly facilitates rapid resolution for complex software, regardless of licensing.

So which application is the worst? Software risk is a way of highlighting the management requirements imposed by software within an environment. Complex software may impose a greater management load than simple software. Tracking risk and vulnerabilities lets security and infrastructure managers predict and deploy the people and processes needed to actively manage the issues associated with certain types of software.
Risky software is not bad?
Tires wear out over time, asphalt roads need to be repaved frequently, roofs need to be replaced, and plumbing leaks once in a while. The need to maintain systems, and to expect greater maintenance based on what those systems do, is normal. Expecting software to be without issues is unreasonable and naive.
Risk is good?
Of course it is. If risk management is a process of ongoing maintenance, then a healthy, interactive community participating in the discovery and reporting of risk issues improves the software. Failing to manage complex software, whether under a free or a proprietary license: that is what is risky.
What do I do?
Complex software needs strong support and an active community. It is a greater risk to use a complex application with no reported vulnerabilities than one with many known issues. Use the best software for the task; it may look risky based on discovered issues. Understand that if your management process includes testing, validation of reported issues, and application of patches as they become available, your risk is remarkably low. If you can update your running software within 30 days of patch releases, your exposure is minimal, and you have an objective process for using complex, quality software within your environment.
Define Policies and Enforce Them
Software inventory tools exist to facilitate the identification of installed software and services. Know what you are using, understand the average work effort required to manage the installed software in your environment, and then set policies to monitor the active management of that software.
Software is asked to do many things. Complex software is asked to do many complex and critical things. More quality software is created by fewer people, in less time, and with fewer resources. Is the software worse than it ever was? No. The power of the community works to expose issues and drive resolutions quickly. Accept that software is evolutionary, put a management process in place to take advantage of input from the community (testing, validation, qualitative review, network and security policy, education), and apply qualified patches. Clear information about software issues reduces operational risk if that information is put to use. The applications for which no information exists pose the greatest threat to security. Without community oversight and review, unknown applications can mistakenly slide under the radar while being large potential threats.
The riskiest software is the software that you don't know about.