M. M. Deza and E. Deza, Encyclopedia of Distances, Springer, pp.1-583, 2009.
DOI : 10.1007/978-3-662-52844-0

T. Shi and S. Horvath, Unsupervised learning with random forest predictors, Journal of Computational and Graphical Statistics, vol.15, issue.1, pp.118-138, 2006.
DOI : 10.1198/106186006x94072

L. Breiman, Random forests, Machine Learning, vol.45, issue.1, pp.5-32, 2001.

B. Percha, Y. Garten, and R. B. Altman, Discovery and explanation of drug-drug interactions via text mining, Pacific Symposium on Biocomputing, pp.410-421, 2012.

M. Pal, Random forest classifier for remote sensing classification, International Journal of Remote Sensing, vol.26, issue.1, pp.217-222, 2005.

J. Friedman, T. Hastie, and R. Tibshirani, The elements of statistical learning, Springer Series in Statistics, vol.1, 2001.

H. L. Kim, D. Seligson, X. Liu, N. Janzen, M. H. Bui et al., Using tumor markers to predict the survival of patients with metastatic renal cell carcinoma, The Journal of Urology, vol.173, issue.5, pp.1496-1501, 2005.

M. C. Abba, H. Sun, K. A. Hawkins, J. A. Drake, Y. Hu et al., Breast cancer molecular signatures as determined by SAGE: correlation with lymph node status, Molecular Cancer Research, vol.5, issue.9, pp.881-890, 2007.
DOI : 10.1158/1541-7786.mcr-07-0055

URL : http://mcr.aacrjournals.org/content/5/9/881.full.pdf

S. I. Rennard, N. Locantore, B. Delafont, R. Tal-Singer, E. K. Silverman et al., Identification of five chronic obstructive pulmonary disease subgroups with different prognoses in the ECLIPSE cohort using cluster analysis, Annals of the American Thoracic Society, vol.12, issue.3, pp.303-312, 2015.

K. Y. Peerbhay, O. Mutanga, and R. Ismail, Random forests unsupervised classification: The detection and mapping of Solanum mauritianum infestations in plantation forestry using hyperspectral data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol.8, issue.6, pp.3107-3122, 2015.

P. Geurts, D. Ernst, and L. Wehenkel, Extremely randomized trees, Machine Learning, vol.63, issue.1, pp.3-42, 2006.
DOI : 10.1007/s10994-006-6226-1

URL : https://hal.archives-ouvertes.fr/hal-00341932

W. M. Rand, Objective criteria for the evaluation of clustering methods, Journal of the American Statistical Association, vol.66, issue.336, pp.846-850, 1971.

L. Hubert and P. Arabie, Comparing partitions, Journal of Classification, vol.2, issue.1, pp.193-218, 1985.
DOI : 10.1007/bf01908075

R. A. Fisher and M. Marshall, Iris data set, UC Irvine Machine Learning Repository, 1936.

M. Forina et al., PARVUS: an extendible package for data exploration, classification and correlation, Institute of Pharmaceutical and Food Analysis and Technologies, Genoa, 1991.

O. L. Mangasarian and W. H. Wolberg, Cancer diagnosis via linear programming, SIAM News, vol.23, issue.5, 1990.
DOI : 10.1287/opre.43.4.570

URL : https://minds.wisconsin.edu/bitstream/1793/64370/1/94-10.pdf

W. H. Kruskal and W. A. Wallis, Use of ranks in one-criterion variance analysis, Journal of the American Statistical Association, vol.47, issue.260, pp.583-621, 1952.

A. Strehl and J. Ghosh, Cluster ensembles: a knowledge reuse framework for combining multiple partitions, Journal of Machine Learning Research, vol.3, pp.583-617, 2002.

H. Elghazel and A. Aussem, Feature selection for unsupervised learning using random cluster ensembles, 10th IEEE International Conference on Data Mining (ICDM), pp.168-175, 2010.
DOI : 10.1109/icdm.2010.137

F. Pedregosa et al., Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, vol.12, pp.2825-2830, 2011.
URL : https://hal.archives-ouvertes.fr/hal-00650905