Using uncertainty as a model selection and comparison criterion

CANTONE, GIOVANNI
2009-01-01

Abstract

Over the last 25+ years, software estimation research has been searching for the best model for estimating variables of interest (e.g., cost, defects, and fault proneness). This research effort has not led to common agreement. One problem is that accuracy has been used as the basis for selection and comparison, but accuracy is not invariant; it depends on the test sample, the error measure, and the chosen error statistics (e.g., MMRE, PRED, and the mean and standard deviation of error samples). Ideally, we would like an invariant criterion. In this paper, we show that uncertainty can be used as an invariant criterion to determine which estimation model should be preferred over others. The majority of this work is empirical, applying Bayesian prediction intervals to some COCOMO model variations on a publicly available cost estimation data set from the PROMISE repository.
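
The abstract's point about error statistics can be made concrete: MMRE and PRED(25) are both computed from magnitude-of-relative-error (MRE) values on a test sample. The following minimal Python sketch uses hypothetical effort values (not the paper's data set, and helper names assumed for illustration) to show how the two statistics are computed and why they can disagree about which model is "more accurate".

    # Minimal sketch, hypothetical values: MMRE vs. PRED(25) on the same test sample.
    import numpy as np

    def mre(actual, predicted):
        # Magnitude of relative error per project: |actual - predicted| / actual
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.abs(actual - predicted) / actual

    def mmre(actual, predicted):
        # Mean MRE over the test sample
        return mre(actual, predicted).mean()

    def pred(actual, predicted, level=0.25):
        # PRED(25): fraction of estimates whose MRE is at most 25%
        return float(np.mean(mre(actual, predicted) <= level))

    actual  = [120, 45, 300, 80]    # hypothetical actual efforts (person-months)
    model_a = [132, 40, 330, 152]   # mostly close estimates, one large miss
    model_b = [154, 32, 384, 58]    # all estimates off by roughly 28%

    print(mmre(actual, model_a), pred(actual, model_a))   # ~0.30, 0.75
    print(mmre(actual, model_b), pred(actual, model_b))   # ~0.28, 0.00
    # MMRE prefers model B while PRED(25) prefers model A: the "best" model
    # depends on the chosen error statistic and, more generally, on the test sample.
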
PROMISE '09 International Conference on Predictor Models in Software Engineering
Vancouver, Canada
2009
5
ICSE colocated event
International relevance
contribution
18-May-2009
2009
Settore ING-INF/05 - Information Processing Systems
English
Software estimation models; accuracy; error measure; error statistics; uncertainty;
http://dl.acm.org/citation.cfm?id=1540464
Conference paper
Sarcia', S., Basili, V., Cantone, G. (2009). Using uncertainty as a model selection and comparison criterion. In Predictor Models in Software Engineering (PROMISE 2009) (pp. 153-161). New York, USA: ACM. [10.1145/1540438.1540464]
Sarcia', S; Basili, V; Cantone, G
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2108/98868