Productivity of Software Enhancement Projects: An Empirical Study

Luigi Lavazza; Geng Liu
2020-01-01

Abstract

Background. Practical use of software code measures for faultiness estimation often requires setting thresholds on the measures, to separate software modules that are likely to be faulty from modules that are likely to be non-faulty. Several threshold proposals exist in the literature. However, different proposals may recommend different threshold values for the same measure, so practitioners may be unsure which threshold value they should use. Objective. Our goal is to investigate whether it is possible to define a single threshold for a code measure, or at least a small range within which to select a threshold. Method. We carried out an empirical study based on two collections of datasets available in the SEACRAFT repository. For each dataset, we built all statistically significant univariate Binary Logistic Regression fault-proneness models, using a set of the most commonly used code measures as independent variables. We then derived thresholds for individual software measures by setting an acceptability threshold on fault-proneness. Finally, we checked whether the distribution of the thresholds obtained for the same measure is concentrated enough that a single value of the code measure can be taken as a “universal” threshold for it. We repeated the same method with bivariate Binary Logistic Regression fault-proneness models. Results. The threshold distributions we obtained are quite dispersed, so it does not seem possible to define “universal” thresholds for the code measures we considered. Conclusions. According to the collected evidence, it does not appear possible to effectively control software faultiness by requiring that the most commonly used code measures satisfy fixed thresholds.
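To make the Method concrete, the following is a minimal sketch of the kind of threshold derivation described above: fit a univariate Binary Logistic Regression fault-proneness model on one code measure, then invert the fitted model at a chosen acceptability level of fault-proneness to obtain a threshold on the measure. The synthetic data, the choice of LOC as the measure, and the acceptability level p0 are illustrative assumptions, not values taken from the paper.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative data: one code measure per module (here LOC) and a fault flag.
loc = rng.integers(10, 1000, size=200).astype(float)
true_p = 1.0 / (1.0 + np.exp(-(0.004 * loc - 2.0)))
faulty = (rng.random(200) < true_p).astype(int)

# Univariate Binary Logistic Regression: P(faulty) = 1 / (1 + exp(-(b0 + b1*x)))
X = sm.add_constant(loc)
model = sm.Logit(faulty, X).fit(disp=False)
b0, b1 = model.params

# Threshold on the measure for an acceptability level p0 of fault-proneness:
# solve p0 = 1 / (1 + exp(-(b0 + b1*x)))  =>  x_threshold = (logit(p0) - b0) / b1
p0 = 0.5  # illustrative acceptability threshold on fault-proneness
x_threshold = (np.log(p0 / (1.0 - p0)) - b0) / b1
print(f"b0={b0:.3f}  b1={b1:.3f}  LOC threshold at p0={p0}: {x_threshold:.1f}")

For the bivariate models also mentioned above, inverting the fitted model at p0 yields a line in the plane of the two measures (b0 + b1*x1 + b2*x2 = logit(p0)) rather than a single cut-off value, so each bivariate model defines a trade-off curve instead of one threshold per measure.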
2020
Joint Proceedings of the 30th International Workshop on Software Measurement and the 15th International Conference on Software Process and Product Measurement (IWSM Mensura 2020)
IWSM-Mensura 2020 International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement 2020
Virtual event
29-30 October 2020
Files for this record:
No files are associated with this record.


Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2101536
