D. Astels, Test-Driven Development: A Practical Guide, 2003.

B. Baudry, F. Fleurey, J.-M. Jézéquel, and Y. Le Traon, Automatic test case optimization: A bacteriologic algorithm, IEEE Software, vol.22, issue.2, pp.76-82, 2005.
DOI: 10.1109/ms.2005.30

B. Baudry, F. Fleurey, and Y. Le Traon, Improving test suites for efficient fault localization, Proceedings of the 28th International Conference on Software Engineering (ICSE '06), pp.82-91, 2006.
DOI: 10.1145/1134285.1134299
URL: https://hal.archives-ouvertes.fr/inria-00542783

G. Bavota, A. Qusef, R. Oliveto, A. De Lucia, and D. Binkley, An empirical analysis of the distribution of unit test smells and their impact on software maintenance, International Conference on Software Maintenance (ICSM), pp.56-65, 2012.

K. Beck and C. Andres, Extreme Programming Explained: Embrace Change, 2004.

A. Beszédes, T. Gergely, L. Schrettner, J. Jász, L. Langó et al., Code Coverage-based Regression Test Selection and Prioritization in WebKit, 28th IEEE International Conference on Software Maintenance (ICSM), pp.46-55, 2012.
DOI: 10.1109/icsm.2012.6405252

A. P. Black, S. Ducasse, O. Nierstrasz, D. Pollet, D. Cassou et al., Pharo by Example. Square Bracket Associates, 2009.

V. Blondeau, A. Etien, N. Anquetil, S. Cresson, P. Croisy et al., Test case selection in industry: An analysis of issues related to static approaches, Software Quality Journal, pp.1-35, 2016.
URL: https://hal.archives-ouvertes.fr/hal-01344842

E. Bodden, A. Sewe, J. Sinschek, H. Oueslati, and M. Mezini, Taming reflection: Aiding static analysis in the presence of reflection and custom class loaders, Proceedings of the 33rd International Conference on Software Engineering, ICSE '11, pp.241-250, 2011.

D. Bowes, T. Hall, J. Petrić, T. Shippey, and B. Turhan, How good are my tests?, Workshop on Emerging Trends in Software Metrics (WETSoM), IEEE/ACM, 2017.
DOI: 10.1109/wetsom.2017.2

M. Breugelmans and B. Van Rompaey, TestQ: Exploring structural and maintenance characteristics of unit test suites, International Workshop on Advanced Software Development Tools and Techniques (WASDeTT), 2008.

D. Cassou, S. Ducasse, L. Fabresse, J. Fabry, and S. Van Caekenberghe, Enterprise Pharo: A Web Perspective. Square Bracket Associates, 2015.
URL: https://hal.archives-ouvertes.fr/hal-01223026

C. Csallner and Y. Smaragdakis, JCrasher: An automatic robust tester for Java, Software: Practice and Experience, vol.34, issue.11, 2004.
DOI: 10.1002/spe.602

B. Daniel, D. Dig, T. Gvero, V. Jagannath, J. Jiaa et al., ReAssert: A tool for repairing broken unit tests, Proceedings of the 33rd International Conference on Software Engineering, ICSE '11, pp.1010-1012, 2011.
DOI: 10.1109/ase.2009.17
URL: http://mir.cs.illinois.edu/reassert/pubs/reassert.pdf

R. A. DeMillo, R. J. Lipton, and F. G. Sayward, Hints on test data selection: Help for the practicing programmer, Computer, vol.11, issue.4, pp.34-41, 1978.

M. Denker, Sub-method Structural and Behavioral Reflection, PhD thesis, University of Bern, 2008.
URL: https://hal.archives-ouvertes.fr/tel-00555937

A. van Deursen, L. Moonen, A. van den Bergh, and G. Kok, Refactoring test code, Proceedings of the 2nd International Conference on Extreme Programming and Flexible Processes in Software Engineering (XP2001), pp.92-95, 2001.

S. Ducasse, O. Nierstrasz, N. Schärli, R. Wuyts, and A. P. Black, Traits: A mechanism for fine-grained reuse, ACM Transactions on Programming Languages and Systems (TOPLAS), vol.28, issue.2, pp.331-388, 2006.

S. Ducasse, D. Pollet, A. Bergel, and D. Cassou, Reusing and composing tests with traits, TOOLS'09: Proceedings of the 47th International Conference on Objects, Models, Components, Patterns, pp.252-271, 2009.
URL: https://hal.archives-ouvertes.fr/inria-00403568

M. Fowler, K. Beck, J. Brant, W. Opdyke, and D. Roberts, Refactoring: Improving the Design of Existing Code, 1999.

M. Gaelli, M. Lanza, O. Nierstrasz, and R. Wuyts, Ordering broken unit tests for focused debugging, 20th International Conference on Software Maintenance (ICSM 2004), pp.114-123, 2004.

M. Gligoric, A. Groce, C. Zhang, R. Sharma, M. A. Alipour et al., Comparing non-adequate test suites using coverage criteria, International Symposium on Software Testing and Analysis, 2013.

K. Herzig and N. Nagappan, Empirically detecting false test alarms using association rules, International Conference on Software Engineering, 2015.

C. Huo and J. Clause, Improving oracle quality by detecting brittle assertions and unused inputs in tests, Foundations of Software Engineering, 2014.

L. Inozemtseva and R. Holmes, Coverage is not strongly correlated with test suite effectiveness, International Conference on Software Engineering, 2014.

R. Lingampally, A. Gupta, and P. Jalote, A multipurpose code coverage tool for Java, 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), pp.261-261, 2007.

A. M. Memon and Q. Xie, Empirical evaluation of the fault-detection effectiveness of smoke regression test cases for GUI-based software, IEEE International Conference on Software Maintenance, pp.8-17, 2004.

G. Meszaros, xUnit Test Patterns: Refactoring Test Code, 2007.

G. Meszaros, S. Smith, and J. Andrea, The test automation manifesto, Proceedings of the Third XP and Second Agile Universe Conference, pp.73-81, 2003.

A. Mockus, N. Nagappan, and T. T. Dinh-trong, Test coverage and post-verification defects: A multiple case study, Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement, ESEM '09, pp.291-301, 2009.

R. Niedermayr, E. Juergens, and S. Wagner, Will my tests tell me if I break this code?, International Workshop on Continuous Software Evolution and Delivery, pp.23-29, 2016.

L. S. Pinto, S. Sinha, and A. Orso, Understanding myths and realities of test-suite evolution, International Conference on Software Engineering, 2012.

S. M. Poulding and R. Feldt, Generating controllably invalid and atypical inputs for robustness testing, IEEE International Conference on Software Testing, Verification and Validation Workshops, pp.81-84, 2017.

S. Reichhart, T. Gîrba, and S. Ducasse, Rule-based assessment of test quality, Journal of Object Technology, vol.6, issue.9 (Special Issue: Proceedings of TOOLS Europe 2007), pp.231-251, 2007.

B. Van Rompaey, B. Du Bois, and S. Demeyer, Characterizing the relative significance of a test smell, IEEE International Conference on Software Maintenance (ICSM), pp.391-400, 2006.

B. Van Rompaey, B. Du Bois, and S. Demeyer, Improving test code reviews with metrics: A pilot study, 2006.

D. Schuler and A. Zeller, Checked coverage: An indicator for oracle quality, Software Testing, Verification and Reliability, vol.23, pp.531-551, 2013.

A. Shahrokni and R. Feldt, RobusTest: Towards a framework for automated testing of robustness in software, International Conference on Advances in System Testing and Validation Lifecycle (VALID), 2011.

S. A. Spoon and O. Shivers, Demand-driven type inference with subgoal pruning: Trading precision for scalability, Proceedings of ECOOP'04, pp.51-74, 2004.

É. Tanter, J. Noyé, D. Caromel, and P. Cointe, Partial behavioral reflection: Spatial and temporal selection of reification, Proceedings of OOPSLA '03, ACM SIGPLAN Notices, pp.27-46, 2003.
URL: https://hal.archives-ouvertes.fr/hal-00457204

N. Tillmann and W. Schulte, Parameterized unit tests, ESEC/SIGSOFT FSE, pp.253-262, 2005.

B. Van Rompaey, B. Du Bois, S. Demeyer, and M. Rieger, On the detection of test smells: A metrics-based approach for general fixture and eager test, IEEE Transactions on Software Engineering, vol.33, issue.12, pp.800-817, 2007.

O. Vera-Pérez, B. Danglot, M. Monperrus, and B. Baudry, A comprehensive study of pseudo-tested methods, 2018.
URL: https://hal.archives-ouvertes.fr/hal-01867423

J. Waletzky, Smoke tests vs. BVTs. Crosslake Tech Blog, 2012.