- Is there a future for empirical software engineering? Victor R. Basili. 1 [doi]
- Comparing the fault-proneness of new and modified code: an industrial case study. Piotr Tomaszewski, Lars-Ola Damm. 2-7 [doi]
- Predicting fault-prone components in a Java legacy system. Erik Arisholm, Lionel C. Briand. 8-17 [doi]
- Predicting component failures at design time. Adrian Schröter, Thomas Zimmermann, Andreas Zeller. 18-27 [doi]
- Analysis of the influence of communication between researchers on experiment replication. Sira Vegas, Natalia Juristo Juzgado, Ana María Moreno, Martín Solari, Patricio Letelier. 28-37 [doi]
- Evaluating guidelines for empirical software engineering studies. Barbara A. Kitchenham, Hiyam Al-Kilidar, Muhammad Ali Babar, Mike Berry, Karl Cox, Jacky Keung, Felicia Kurniawati, Mark Staples, He Zhang, Liming Zhu. 38-47 [doi]
- Using observational pilot studies to test and improve lab packages. Manoel G. Mendonça, Daniela Cruzes, Josemeire Dias, Maria Cristina Ferreira de Oliveira. 48-57 [doi]
- A framework for the analysis of software cost estimation accuracy. Stein Grimstad, Magne Jørgensen. 58-65 [doi]
- A comparative study of attribute weighting heuristics for effort estimation by analogy. Jingzhou Li, Günther Ruhe. 66-74 [doi]
- Cross-company and single-company effort models using the ISBSG database: a further replicated study. Chris Lokan, Emilia Mendes. 75-84 [doi]
- An empirical comparison between pair development and software inspection in Thailand. Monvorath Phongpaibul, Barry W. Boehm. 85-94 [doi]
- PBR vs. checklist: a replication in the n-fold inspection context. Lulu He, Jeffrey C. Carver. 95-104 [doi]
- An empirical analysis and comparison of random testing techniques. Johannes Mayer, Christoph Schneckenburger. 105-114 [doi]
- Defects in automotive use cases. Fredrik Törner, Martin Ivarsson, Fredrik Pettersson, Peter Öhman. 115-123 [doi]
- A case study on the application of UML in legacy development. Bente Anda, Kai Hansen. 124-133 [doi]
- Documenting design decision rationale to improve individual and team design decision making: an experimental evaluation. Davide Falessi, Giovanni Cantone, Martin Becker. 134-143 [doi]
- Successful software project and products: An empirical investigation. Richard Berntsson-Svensson, Aybüke Aurum. 144-153 [doi]
- Predicting good requirements for in-house development projects. June M. Verner, Karl Cox, Steven J. Bleistein. 154-163 [doi]
- Agile customer engagement: a longitudinal qualitative case study. Geir Kjetil Hanssen, Tor Erlend Fægri. 164-173 [doi]
- Maximising the information gained from an experimental analysis of code inspection and static analysis for concurrent Java components. Margaret A. Wojcicki, Paul A. Strooper. 174-183 [doi]
- Testing and inspecting reusable product line components: first empirical results. Christian Denger, Ronny Kolb. 184-193 [doi]
- A literature survey of the quality economics of defect-detection techniques. Stefan Wagner. 194-203 [doi]
- The evolution of FreeBSD and Linux. Clemente Izurieta, James M. Bieman. 204-211 [doi]
- A family of empirical studies to compare informal and optimization-based planning of software releases. Gengshen Du, Jim McElroy, Günther Ruhe. 212-221 [doi]
- Empirical estimates of software availability of deployed systems. Audris Mockus. 222-231 [doi]
- A follow up study of the effect of personality on the performance of software engineering teams. John Karn, Tony Cowling. 232-241 [doi]
- An empirical study of developers views on software reuse in Statoil ASA. Odd Petter N. Slyngstad, Anita Gupta, Reidar Conradi, Parastoo Mohagheghi, Harald Rønneberg, Einar Landre. 242-251 [doi]
- Distributed versus face-to-face meetings for architecture evaluation: a controlled experiment. Muhammad Ali Babar, Barbara A. Kitchenham, D. Ross Jeffery. 252-261 [doi]
- Improving software testing by observing practice. Ossi Taipale, Kari Smolander. 262-271 [doi]
- An industrial case study of structural testing applied to safety-critical embedded software. Jing Guan, Jeff Offutt, Paul Ammann. 272-277 [doi]
- An empirical evaluation of a testing and debugging methodology for Excel. Jeffrey Carver, Marc Fisher II, Gregg Rothermel. 278-287 [doi]
- Common refactorings, a dependency graph and some code smells: an empirical study of Java OSS. Steve Counsell, Youssef Hassoun, George Loizou, Rajaa Najjar. 288-296 [doi]
- Drivers for software refactoring decisions. Mika Mäntylä, Casper Lassenius. 297-306 [doi]
- Eliciting better quality architecture evaluation scenarios: a controlled experiment on top-down vs. bottom-up. Muhammad Ali Babar, Stefan Biffl. 307-315 [doi]
- A goal question metric based approach for efficient measurement framework definition. Patrik Berander, Per Jönsson. 316-325 [doi]
- Evaluating the practical use of different measurement scales in requirements prioritisation. Lena Karlsson, Martin Höst, Björn Regnell. 326-335 [doi]
- Requirement error abstraction and classification: an empirical study. Gursimran Singh Walia, Jeffrey Carver, Thomas Philip. 336-345 [doi]
- Identifying domain-specific defect classes using inspections and change history. Taiga Nakamura, Lorin Hochstein, Victor R. Basili. 346-355 [doi]
- Evaluating the efficacy of test-driven development: industrial case studies. Thirumalesh Bhat, Nachiappan Nagappan. 356-363 [doi]
- Evaluating advantages of test driven development: a controlled experiment with professionals. Gerardo Canfora, Aniello Cimitile, Félix García, Mario Piattini, Corrado Aaron Visaggio. 364-371 [doi]