Rotten Green Tests

Abstract: Unit tests are a tenet of agile programming methodologies, and are widely used to improve code quality and prevent code regression. A green (passing) test is usually taken as a robust sign that the code under test is valid. However, some green tests contain assertions that are never executed. We call such tests Rotten Green Tests. Rotten Green Tests represent a case worse than a broken test: they report that the code under test is valid, but in fact do not test that validity. We describe an approach to identify rotten green tests by combining simple static and dynamic call-site analyses. Our approach takes into account test helper methods, inherited helpers, and trait compositions, and has been implemented in a tool called DrTest. DrTest reports no false negatives, yet it still reports some false positives due to conditional use or multiple test contexts. Using DrTest we conducted an empirical evaluation of 19,905 real test cases in mature projects of the Pharo ecosystem. The results of the evaluation show that the tool is effective: it detected 294 rotten green tests, i.e., tests that contain assertions that are not executed. Some rotten tests have been "sleeping" in Pharo for at least 5 years.
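To illustrate the phenomenon the abstract describes, here is a minimal sketch of a rotten green test in Python's unittest framework (the paper's tooling targets Pharo; this example, including the `collect_overdue_items` helper, is purely illustrative and not taken from the paper):

```python
import unittest

class RottenGreenExample(unittest.TestCase):
    # Hypothetical fixture helper that accidentally returns no data.
    def collect_overdue_items(self):
        return []

    def test_overdue_items_are_flagged(self):
        # The loop body never runs because the helper returns an
        # empty list, so the assertion inside it is never executed.
        # The test passes (green), yet it verifies nothing: a
        # "rotten green test" in the paper's terminology.
        for item in self.collect_overdue_items():
            self.assertTrue(item.flagged)

if __name__ == "__main__":
    unittest.main()
```

A purely static analysis would see an assertion and call the test fine; only by tracking at run time whether each assertion call site is actually executed, as DrTest does, can the rot be detected.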
Document type :
Conference papers

Cited literature [44 references]

https://hal.inria.fr/hal-02002346
Contributor : Lse Lse
Submitted on : Thursday, January 31, 2019 - 4:03:08 PM
Last modification on : Thursday, May 23, 2019 - 1:37:35 AM
Long-term archiving on : Wednesday, May 1, 2019 - 8:37:19 PM

File

final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02002346, version 1

Citation

Julien Delplanque, Stéphane Ducasse, Guillermo Polito, Andrew Black, Anne Etien. Rotten Green Tests. ICSE 2019 - International Conference on Software Engineering, May 2019, Montréal, Canada. ⟨hal-02002346v1⟩
