ICC   25427
INSTITUTO DE INVESTIGACION EN CIENCIAS DE LA COMPUTACION
Executing Unit - UE
Conferences and scientific meetings
Title:
How Do Automatically Generated Unit Tests Influence Software Maintenance?
Author(s):
Sina Shamshiri; Neil Walkinshaw; José Miguel Rojas; Gordon Fraser; Juan Pablo Galeotti
Location:
Västerås
Meeting:
Conference; 11th IEEE International Conference on Software Testing, Verification and Validation (ICST); 2018
Organizing institution:
IEEE
Abstract:
Generating unit tests automatically saves time over writing tests manually and can lead to higher code coverage. However, automatically generated tests are usually not based on realistic scenarios, and are therefore generally considered to be less readable. This places a question mark over their practical value: every time a test fails, a developer has to decide whether this failure has revealed a regression fault in the program under test, or whether the test itself needs to be updated. Does the fact that automatically generated tests are harder to read outweigh the time savings gained by their automated generation, and render them more of a hindrance than a help for software maintenance? In order to answer this question, we performed an empirical study in which participants were presented with an automatically generated or manually written failing test, and were asked to identify and fix the cause of the failure. Our experiment and two replications resulted in a total of 150 data points based on 75 participants. Whilst maintenance activities take longer when working with automatically generated tests, we found developers to be equally effective with manually written and automatically generated tests. This has implications for how automated test generation is best used in practice, and it indicates a need for research into the generation of more realistic tests.
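For illustration only (this example is not taken from the paper's experimental materials): the readability contrast at the heart of the study can be seen by comparing a hand-written JUnit test with one in the style typically emitted by generators such as EvoSuite. The BoundedStack class and both tests below are hypothetical sketches.

import static org.junit.Assert.*;
import org.junit.Test;

// Hypothetical class under test: a stack with a fixed capacity.
class BoundedStack {
    private final int[] items;
    private int size = 0;
    BoundedStack(int capacity) { items = new int[capacity]; }
    void push(int x) {
        if (size == items.length) throw new IllegalStateException("full");
        items[size++] = x;
    }
    int pop() {
        if (size == 0) throw new IllegalStateException("empty");
        return items[--size];
    }
}

public class BoundedStackTest {
    // Manually written: the scenario and intent are clear from the name.
    @Test
    public void pushThenPopReturnsLastPushedValue() {
        BoundedStack stack = new BoundedStack(2);
        stack.push(7);
        stack.push(42);
        assertEquals(42, stack.pop());
    }

    // Generated-style: machine-chosen names and values exercise the same
    // behavior but convey no scenario, so when this test fails the developer
    // must work out whether the code or the test is at fault.
    @Test
    public void test0() {
        BoundedStack boundedStack0 = new BoundedStack(1);
        boundedStack0.push(-1);
        int int0 = boundedStack0.pop();
        assertEquals(-1, int0);
    }
}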