- Factors characterizing reopened issues: a case study. Bora Caglayan, Ayse Tosun Misirli, Andriy V. Miranskyy, Burak Turhan, Ayse Bener. 1-10 [doi]
- The scientific basis for prediction research. Martin Shepperd. 1-2 [doi]
- Defect, defect, defect: defect prediction 2.0. Sunghun Kim. 1-2 [doi]
- Learning to change projects. Raymond Borges, Tim Menzies. 11-18 [doi]
- DRETOM: developer recommendation based on topic models for bug resolution. Xihao Xie, Wen Zhang, Ye Yang, Qing Wang. 19-28 [doi]
- Web effort estimation: the value of cross-company data set compared to single-company data set. Filomena Ferrucci, Emilia Mendes, Federica Sarro. 29-38 [doi]
- StatREC: a graphical user interface tool for visual hypothesis testing of cost prediction models. Nikolaos Mittas, Ioannis Mamalikidis, Lefteris Angelis. 39-48 [doi]
- A systematic review of web resource estimation. Damir Azhar, Emilia Mendes, Patricia Riddle. 49-58 [doi]
- Alternative methods using similarities in software effort estimation. Makrina Viola Kosti, Nikolaos Mittas, Lefteris Angelis. 59-68 [doi]
- Can cross-company data improve performance in software effort estimation? Leandro L. Minku, Xin Yao. 69-78 [doi]
- An adaptive approach with active learning in software fault prediction. Huihua Lu, Bojan Cukic. 79-88 [doi]
- Size doesn't matter?: on the value of software size features for effort estimation. Ekrem Kocaguneli, Tim Menzies, Jairus Hihn, Byeong Ho Kang. 89-98 [doi]
- A cost-benefit model for software quality assurance activities. Tilmann Hampp. 99-108 [doi]
- Comparing the performance of fault prediction models which report multiple performance measures: recomputing the confusion matrix. David Bowes, Tracy Hall, David Gray. 109-118 [doi]