Rotten Green Tests

Abstract: Unit tests are a tenet of agile programming methodologies, and are widely used to improve code quality and prevent code regression. A green (passing) test is usually taken as a robust sign that the code under test is valid. However, some green tests contain assertions that are never executed. We call such tests Rotten Green Tests. Rotten Green Tests represent a case worse than a broken test: they report that the code under test is valid, but in fact do not test that validity. We describe an approach to identify rotten green tests by combining simple static and dynamic call-site analyses. Our approach takes into account test helper methods, inherited helpers, and trait compositions, and has been implemented in a tool called DrTest. DrTest reports no false negatives, yet it still reports some false positives due to conditional use or multiple test contexts. Using DrTest we conducted an empirical evaluation of 19,905 real test cases in mature projects of the Pharo ecosystem. The results of the evaluation show that the tool is effective; it detected 294 tests as rotten green tests that contain assertions that are not executed. Some rotten tests have been "sleeping" in Pharo for at least 5 years.
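The abstract's core idea can be sketched outside of Pharo. The following is a minimal, hypothetical Python analogue (the function `classify`, the helper `check_label`, and the test names are illustrative, not from the paper): the test passes, yet its assertion is guarded by a condition that the test input never satisfies, so nothing is actually checked.

```python
import unittest

def classify(n):
    """Toy code under test: label an integer's sign."""
    return "positive" if n > 0 else "non-positive"

class RottenGreenExample(unittest.TestCase):
    def check_label(self, n, expected):
        # Test helper with a guard: the assertion runs only for positive inputs.
        if n > 0:
            self.assertEqual(classify(n), expected)

    def test_non_positive(self):
        # Green but rotten: n = -1 never enters the guarded branch,
        # so assertEqual is never executed and the test verifies nothing.
        self.check_label(-1, "non-positive")

if __name__ == "__main__":
    unittest.main()
```

A combined static/dynamic analysis in the spirit of DrTest would first find the assertion call site statically (including inside helpers like `check_label`), then observe at run time that the test passed without that call site ever executing, and flag the test as rotten.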
Document type: Conference papers

Cited literature: 44 references
Contributor: Lse
Submitted on: Wednesday, May 22, 2019 - 9:20:53 AM
Last modification on: Friday, January 21, 2022 - 3:11:32 AM




  • HAL Id: hal-02002346, version 2



Julien Delplanque, Stéphane Ducasse, Guillermo Polito, Andrew Black, Anne Etien. Rotten Green Tests. ICSE 2019 - International Conference on Software Engineering, May 2019, Montréal, Canada. ⟨hal-02002346v2⟩


