J. Alcalá-Fdez, A. Fernández, J. Luengo, J. Derrac, and S. García, KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework, Multiple-Valued Logic and Soft Computing, vol.17, issue.2-3, pp.255-287, 2011.

F. Bernardini, M. Monard, and R. Prati, Constructing ensembles of symbolic classifiers, Fifth International Conference on Hybrid Intelligent Systems (HIS'05), IEEE, 2005.

S. Bharathidason and C. Jothi Venkataeswaran, Improving Classification Accuracy based on Random Forest Model with Uncorrelated High Performing Trees, International Journal of Computer Applications, vol.101, issue.13, pp.26-30, 2014.
DOI : 10.5120/17749-8829

G. Biau, Analysis of a random forests model, The Journal of Machine Learning Research, vol.13, issue.1, pp.1063-1095, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00704947

L. Breiman, Bagging predictors, Machine Learning, vol.24, issue.2, pp.123-140, 1996.
DOI : 10.1007/BF00058655

L. Breiman, Random Forests, Machine Learning, vol.45, issue.1, pp.5-32, 2001.
DOI : 10.1023/A:1010933404324

J. Demšar, Statistical Comparisons of Classifiers over Multiple Data Sets, J. Mach. Learn. Res, vol.7, pp.1-30, 2006.

P. Domingos and M. Pazzani, On the Optimality of the Simple Bayesian Classifier under Zero-One Loss, Machine Learning, vol.29, issue.2/3, pp.103-130, 1997.
DOI : 10.1023/A:1007413511361

M. Fernández-Delgado, E. Cernadas, S. Barro, and D. Amorim, Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?, Journal of Machine Learning Research, vol.15, pp.3133-3181, 2014.

H. Finner, On a Monotonicity Problem in Step-Down Multiple Test Procedures, Journal of the American Statistical Association, vol.88, issue.423, pp.920-923, 1993.
DOI : 10.2307/1266545

Y. Freund and R. E. Schapire, Experiments with a new boosting algorithm, Proceedings of the Thirteenth International Conference on Machine Learning (ICML), pp.148-156, 1996.

J. H. Friedman, On Bias, Variance, 0/1-Loss, and the Curse-of-Dimensionality, Data Mining and Knowledge Discovery, vol.1, issue.1, pp.55-77, 1997.
DOI : 10.1023/A:1009778005914

M. Friedman, The Use of Ranks to Avoid the Assumption of Normality Implicit in the Analysis of Variance, Journal of the American Statistical Association, vol.32, issue.200, pp.675-701, 1937.
DOI : 10.1080/01621459.1937.10503522

T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, 2009.

T. K. Ho, The random subspace method for constructing decision forests, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.20, issue.8, pp.832-844, 1998.

S. Kotsiantis, Combining bagging, boosting, rotation forest and random subspace methods, Artificial Intelligence Review, vol.35, issue.3, pp.223-240, 2011.

D. Opitz and R. Maclin, Popular ensemble methods: An empirical study, Journal of Artificial Intelligence Research, vol.11, pp.169-198, 1999.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion et al., Scikit-learn: Machine Learning in Python, J. Mach. Learn. Res, vol.12, pp.2825-2830, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00650905

M. Robnik-Šikonja, Improving Random Forests, Machine Learning: ECML 2004, pp.359-370, 2004.

A. Saffari, C. Leistner, J. Santner, M. Godec, and H. Bischof, On-line Random Forests, 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pp.1393-1400, 2009.
DOI : 10.1109/ICCVW.2009.5457447

E. K. Tang, P. N. Suganthan, and X. Yao, An analysis of diversity measures, Machine Learning, vol.65, issue.1, pp.247-271, 2006.

A. Verikas, A. Gelzinis, and M. Bacauskiene, Mining data with random forests: A survey and results of new tests, Pattern Recognition, vol.44, issue.2, pp.330-349, 2011.
DOI : 10.1016/j.patcog.2010.08.011

F. Wilcoxon, Individual Comparisons by Ranking Methods, Biometrics Bulletin, vol.1, issue.6, pp.80-83, 1945.
DOI : 10.2307/3001968

S. J. Winham, R. R. Freimuth, and J. M. Biernacka, A weighted random forests approach to improve predictive performance, Statistical Analysis and Data Mining, vol.6, issue.6, pp.496-505, 2013.

URL : http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3912194