Trustworthiness is one of the main aspects that contribute to the adoption or rejection of a software product. This is true for any product in general, but especially for Open Source Software (OSS), whose trustworthiness is sometimes still regarded as less assured than that of closed source products. Only recently have several industrial software organizations started investigating the potential of OSS products, as users or even as producers. As they become more and more involved in the OSS world, these organizations are clearly interested in ways to assess the trustworthiness of OSS products, so as to choose those that are adequate for their goals and needs. Trustworthiness is a major issue when people and organizations face the selection and adoption of new software. Although some ad-hoc methods have been proposed, there is not yet general agreement about which software characteristics contribute to trustworthiness. Such methods, like OpenBQR [30] and other similar approaches [58][59], assess the trustworthiness of a software product by means of a weighted sum of specific quality evaluations. None of the existing methods based on weighted sums has been widely adopted. In fact, these methods typically leave the user with two hard problems, common to all models built as weighted sums: identifying the factors that should be taken into account, and assigning to each of these factors the “correct” weight that quantifies its relative importance. Therefore, this work focuses on defining an adequate notion of trustworthiness of Open Source products and artifacts, and on identifying a number of factors that influence it. The goal is to help and guide both developers and users in deciding whether a given program (or library, or other piece of software) is “good enough” and can be trusted for use in an industrial or professional context. 
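To make the criticism concrete, the following is a minimal sketch of the kind of weighted-sum scheme the abstract refers to. It is not OpenBQR itself; the factor names, scores, and weights are hypothetical examples, chosen only to show where the two hard problems (which factors, which weights) enter.

```python
def weighted_trustworthiness(scores, weights):
    """Combine per-factor quality scores (in [0, 1]) into a single
    normalized trustworthiness value via a weighted sum."""
    assert scores.keys() == weights.keys()
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_weight

# Hypothetical factors and weights: the user must choose both.
scores = {"documentation": 0.8, "community_activity": 0.6, "license_clarity": 0.9}
weights = {"documentation": 2.0, "community_activity": 3.0, "license_clarity": 1.0}
print(round(weighted_trustworthiness(scores, weights), 3))  # → 0.717
```

Note that the result depends entirely on the chosen weights: doubling the weight of any one factor changes the ranking of products, which is exactly why such models have not been widely adopted without a principled way to fix the weights.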
The result of this work is a set of estimation models for the perceived trustworthiness of OSS. The work has been carried out in the context of the IST project QualiPSo (http://www.qualipso.eu/), funded by the EU in the 6th FP (IST-034763). The first step focuses on defining an adequate notion of trustworthiness of software products and artifacts and on identifying a number of factors that influence it. The definition of the trustworthiness factors is driven by the specific business goals of each organization, so we carried out a survey to elicit these goals and factors directly from industrial players, deriving the factors from real user needs rather than from our own personal beliefs or from the available literature alone. The questions in the questionnaire were classified into three categories: 1) organization, project, and role; 2) actual problems, current trustworthiness evaluation processes, and factors; 3) wishes. These questions are needed to understand what information should be available but is not, and what indicators should be provided for an OSS product to support its adoption. To test the applicability of the trustworthiness factors identified by means of the questionnaires, we selected a set of widely adopted OSS projects, generally considered trustworthy, to be used as references. A first quick analysis was then carried out to check which factors were readily available on each project’s web site. The idea was to emulate the search for information carried out by a potential user, who browses the project’s web site but is not willing to spend much effort and time on a complete analysis. By analyzing the results of this investigation, we discovered that most of the trustworthiness factors are generally not accompanied by enough information to make an objective assessment, even though some of these factors were ranked as very important by the respondents of our survey. 
To fill this gap, we defined a set of proxy measures to use whenever a factor cannot be directly assessed on the basis of readily available information. Moreover, some factors are not measurable at all if developers do not explicitly provide essential information: this happens for all factors that refer to countable data (e.g., the number of downloads cannot be evaluated reliably if the development community does not publish it). Then, by taking into account the trustworthiness factors and the experience gained through the project analysis, we defined a Goal/Question/Metric (GQM [29]) model for trustworthiness, to identify the qualities and metrics that determine the perception of trustworthiness by users. To measure the metrics identified in the GQM model, we identified a set of tools; when possible, these were obtained by adapting, extending, and integrating existing tools. Since most of the metrics were not available via the selected tools, we developed MacXim, a static code analysis tool. Together, the selected tools integrate a number of OSS tools that support the creation of a measurement plan, starting from the objectives and goals of the main actors and stakeholders (developer community, user community, business needs, specific users, etc.), down to the specific static and dynamic metrics that need to be collected to fulfill those goals. To validate the GQM model and build quantitative models of perceived trustworthiness and reliability, we collected both subjective evaluations and objective measures on a sample of 22 Java and 22 C/C++ OSS products. Objective measures were collected by means of MacXim and the other identified tools, while subjective evaluations were collected by means of more than 500 questionnaires. 
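The GQM paradigm organizes measurement top-down: each goal is refined into questions, and each question into metrics that answer it. The sketch below shows one way such a hierarchy can be represented; the goal, question, and metric texts are hypothetical examples in the spirit of the trustworthiness model, not the model defined in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str  # a concrete, collectable measure

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)  # metrics answering it

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)  # questions refining it

# Hypothetical example of a goal refined into one question and two metrics.
goal = Goal("Assess the perceived trustworthiness of an OSS product")
q = Question("How complex is the code base?")
q.metrics += [Metric("effective LOC"), Metric("McCabe cyclomatic complexity")]
goal.questions.append(q)
print(len(goal.questions), len(goal.questions[0].metrics))  # → 1 2
```

The value of the hierarchy is traceability: every collected metric can be traced back through a question to a stakeholder goal, which is what allows the measurement plan to start from actors' objectives and end at static and dynamic code metrics.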
Specifically, the subjective evaluations concerned how users evaluate the trustworthiness, reliability, and other qualities of OSS; the objective measures concerned software attributes like size, complexity, modularity, and cohesion. Finally, we correlated the objective code measures with users’ and developers’ evaluations of the OSS products. The result is a set of quantitative models that account for the dependence of the perceivable qualities of OSS on objectively observable qualities of the code. Unlike the weighted-sum models usually found in the literature, these are estimation models [87]: the relevant factors and their specific weights are identified via statistical analysis, rather than in a more subjective way, as usually happens. Qualitatively, our results may not be totally surprising. For instance, it is generally expected that bigger and more complex products are less trustworthy than smaller and simpler ones; likewise, well modularized products are expected to be more reliable. Concretely, our analyses indicate that OSS products are most likely to be trustworthy if:
• their size is not greater than 100,000 effective LOC;
• the number of Java packages is lower than 228.
The models derived in our work can be used by end-users and developers who would like to evaluate the trustworthiness and reliability of existing OSS products and components they intend to use or reuse, based on measurable OSS code characteristics. They can also be used by the developers of OSS products themselves, when setting code quality targets based on the level of trustworthiness and reliability they want to achieve. In this way, the information obtained via our models serves as an additional input to informed decision making. 
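The two thresholds above can be read as a simple screening rule. The sketch below applies them directly; the two cut-offs come from the text, while the function name and the example inputs are our own illustrative choices, and a real assessment would of course weigh this output against the other factors discussed in the work.

```python
def likely_trustworthy(effective_loc, java_packages):
    """Screen a Java OSS product against the two reported cut-offs:
    at most 100,000 effective LOC and fewer than 228 packages."""
    return effective_loc <= 100_000 and java_packages < 228

# Hypothetical products: one small and well partitioned, one above both cut-offs.
print(likely_trustworthy(80_000, 150))   # → True
print(likely_trustworthy(250_000, 300))  # → False
```

A developer could use the same rule in reverse, as a quality target: keep effective size under 100,000 LOC and the package count under 228 while the product grows.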
Thus, unlike several discussions based on (sometimes interested) opinions about the quality of OSS, this study aims at deriving statistically significant models that are based on repeatable measures and on user evaluations provided by a reasonably large sample of OSS users. The detailed results are reported in the next chapters as follows:
• Chapter 1 introduces this work;
• Chapter 2 reviews the related literature;
• Chapter 3 reports the identified trustworthiness factors;
• Chapter 4 describes how we built the trustworthiness model;
• Chapter 5 shows the tools we developed for this activity;
• Chapter 6 reports on the experimentation phase;
• Chapter 7 shows the results of the experimentation;
• Chapter 8 draws conclusions and highlights future work;
• Chapter 9 lists the publications made during the PhD.

Towards a trustworthiness model for Open Source software / Taibi, Davide. - (2011).


Keywords: open source, software quality
File in this record: Ph_thesis_taibi_completa.pdf (full thesis text; doctoral thesis; Adobe PDF; 10.17 MB; open access; license not specified).
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2090198