Sun 30 October - Fri 4 November 2016 Amsterdam, Netherlands
Sun 30 Oct 2016 14:40 - 15:10 at Berlin - Session 3

Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used to predict defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect-prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers on an open-source product line. Our results show that the technique can be effective. In our best scenario, a Naive Bayes classifier achieves an accuracy of 73% in classifying features as defective or clean. Based on these results, we discuss directions for future work.
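To illustrate the kind of classification the abstract describes, the sketch below trains a minimal Gaussian Naive Bayes classifier on hypothetical feature-level process metrics (commits touching a feature, lines churned). The data, metric choices, and function names are illustrative assumptions, not the paper's actual metrics or code.

```python
import math

def fit(samples, labels):
    """Estimate per-class priors and per-metric mean/variance."""
    model = {}
    for cls in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == cls]
        prior = len(rows) / len(samples)
        stats = []
        for i in range(len(rows[0])):
            col = [r[i] for r in rows]
            mean = sum(col) / len(col)
            # Guard against zero variance with a tiny floor.
            var = sum((x - mean) ** 2 for x in col) / len(col) or 1e-9
            stats.append((mean, var))
        model[cls] = (prior, stats)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    best, best_lp = None, -math.inf
    for cls, (prior, stats) in model.items():
        lp = math.log(prior)
        for xi, (mean, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (xi - mean) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

# Made-up training data: (commits touching the feature, lines churned).
X = [(2, 40), (3, 55), (25, 900), (30, 1200), (4, 60), (28, 1000)]
y = ["clean", "clean", "defective", "defective", "clean", "defective"]

model = fit(X, y)
print(predict(model, (27, 950)))  # prints "defective" (high-churn feature)
print(predict(model, (3, 50)))    # prints "clean" (low-churn feature)
```

In practice one would use a library implementation (e.g. scikit-learn's `GaussianNB`) and cross-validated evaluation rather than this hand-rolled version.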

Sun 30 Oct
Times are displayed in time zone: (GMT+02:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

13:30 - 15:10: FOSD - Session 3 at Berlin
13:30 - 14:00
Spencer Hubbard (Oregon State University, USA), Eric Walkingshaw (Oregon State University, USA)
14:05 - 14:35
Sofia Ananieva (FZI Research Center for Information Technology), Matthias Kowal (TU Braunschweig, Germany), Thomas Thüm (TU Braunschweig, Germany), Ina Schaefer (TU Braunschweig, Germany)
14:40 - 15:10
Rodrigo Queiroz (University of Waterloo, Canada), Thorsten Berger (Chalmers University of Technology, Sweden), Krzysztof Czarnecki (University of Waterloo, Canada)