Modification of the Open Source Maturity Model

FOSSBazaar is no longer being updated. The information on this site is preserved for your convenience but may be out of date. Please visit Linux Foundation's Open Compliance Program for current information and activities.


The Open Source Maturity Model is geared toward comparing different open source software packages against one another for deployment into an IT organization. As such, it is too general in some areas, and its goal differs from ours. We are not interested in comparing competing candidates for a pilot selection; we are interested in generating a ranking for a particular open source product that will help us minimize the risk of providing third-tier support for that product. The Documentation and Support sections are kept largely as is to help gauge the maturity of the product. The Product section, however, is modified both to include specific questions and to focus the assessment on determining the maturity and, more importantly, the viability of the assessed product. The focus is on how active the product is, how receptive its developers are, how active the product's community is, and how widely the product is accepted by supported distributions.

A large deviation from the Open Source Maturity Model in the Open Source Supportability Assessment is the scoring system. Each question or category is scored from 1 to 5, with the higher score of 5 indicating a better position, lower risk, stronger standing, or a more extensive feature. The scale follows the standard "Strongly Disagree" to "Strongly Agree" setup. Scoring is of course subjective, but don't let that slow down the assessment. There is no passing or failing grade: all the score is trying to accomplish is to give us a gauge of how robust the available support is and how large one's risk is. Granted, a minimum score will probably mean there are serious support issues, and a maximum score will probably mean there are compelling support options available. But a low score does not automatically mean that support is a showstopper issue.

In the header of the Open Source Supportability Assessment there is a place to record the name of the product assessed, a general synopsis of the assessment, the score, and a place for a "next-best" alternative product with discussion. The next-best alternative is a software product, usually open source, that is very close in functionality, maturity, and rough assessment value to the currently assessed product. It can also be better than the current product, a fact which should be discussed in the comment section. Commercial products can also be named in this field, if their use is warranted.
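As a rough illustration, the assessment record and its 1-to-5 scoring scheme could be sketched in code as follows. This is only a sketch under the assumptions above: all class, field, and question names here are hypothetical, since the actual assessment is a document template, not software.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SupportabilityAssessment:
    # Header fields described in the text: product name, synopsis,
    # and an optional "next-best" alternative product.
    product: str
    synopsis: str
    next_best_alternative: str = ""
    # Each entry maps a question or category to a 1-5 Likert score
    # (1 = "Strongly Disagree" ... 5 = "Strongly Agree").
    scores: dict = field(default_factory=dict)

    def rate(self, question: str, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("scores run from 1 (Strongly Disagree) "
                             "to 5 (Strongly Agree)")
        self.scores[question] = score

    def overall(self) -> float:
        # A simple average; there is no pass/fail threshold,
        # only a gauge of support robustness and risk.
        return mean(self.scores.values())

# Example usage with hypothetical questions:
a = SupportabilityAssessment(
    product="ExampleLib",
    synopsis="Active project with a small but responsive community",
    next_best_alternative="OtherLib",
)
a.rate("The product's community is active", 4)
a.rate("The developers are receptive to outside patches", 3)
print(a.overall())  # 3.5
```

A higher aggregate simply suggests stronger support options and lower risk; as the text notes, a low aggregate flags likely support issues but is not by itself a showstopper.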