S. Arlot and A. Celisse, A survey of cross-validation procedures for model selection, Statistics Surveys, vol. 4, pp. 40-79, 2010.
DOI: 10.1214/09-SS054

URL: https://hal.archives-ouvertes.fr/hal-00407906

R. Bhatia, Matrix Analysis, Springer, 2013.
DOI: 10.1007/978-1-4612-0653-8

S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence, Oxford University Press, 2013.
DOI: 10.1093/acprof:oso/9780199535255.001.0001

URL: https://hal.archives-ouvertes.fr/hal-00794821

O. Bousquet and A. Elisseeff, Stability and generalization, Journal of Machine Learning Research, vol. 2, pp. 499-526, 2002.

A. Celisse and T. Mary-Huard, New upper bounds on cross-validation for the k-Nearest Neighbor classification rule, arXiv preprint, 2015.
URL: https://hal.archives-ouvertes.fr/hal-01185092

L. Devroye, Exponential inequalities in nonparametric estimation, in Nonparametric Functional Estimation and Related Topics, pp. 31-44, 1991.
DOI: 10.1007/978-94-011-3222-0_3

L. Devroye and T. J. Wagner, Distribution-free performance bounds for potential function rules, IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 601-604, 1979.
DOI: 10.1109/TIT.1979.1056087

A. Elisseeff, T. Evgeniou, and M. Pontil, Stability of randomized learning algorithms, Journal of Machine Learning Research, vol. 6, pp. 55-79, 2005.

T. Evgeniou, M. Pontil, and A. Elisseeff, Leave one out error, stability, and generalization of voting combinations of classifiers, Machine Learning, vol. 55, pp. 71-97, 2004.
DOI: 10.1023/B:MACH.0000019805.88351.60

J. Friedman, T. Hastie, and R. Tibshirani, The Elements of Statistical Learning, Springer, 2009.

H. V. Henderson and S. R. Searle, On deriving the inverse of a sum of matrices, SIAM Review, vol. 23, no. 1, pp. 53-60, 1981.
DOI: 10.1137/1023004

S. Kale, R. Kumar, and S. Vassilvitskii, Cross-validation and mean-square stability, in Proceedings of the Second Symposium on Innovations in Computer Science (ICS 2011), 2011.

M. Kearns and D. Ron, Algorithmic stability and sanity-check bounds for leave-one-out cross-validation, Neural Computation, vol. 11, no. 6, pp. 1427-1453, 1999.

R. Kumar, D. Lokshtanov, S. Vassilvitskii, and A. Vattani, Near-optimal bounds for cross-validation via loss stability, in Proceedings of the 30th International Conference on Machine Learning, pp. 27-35, 2013.

S. Kutin and P. Niyogi, Almost-everywhere algorithmic stability and generalization error, in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pp. 275-282, 2002.

C. McDiarmid, On the method of bounded differences, in Surveys in Combinatorics, pp. 148-188, Cambridge University Press, 1989.
DOI: 10.1017/CBO9781107359949.008

S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin, Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization, Advances in Computational Mathematics, vol. 25, no. 1-3, pp. 161-193, 2006.
DOI: 10.1007/s10444-004-7634-z

A. Rakhlin, S. Mukherjee, and T. Poggio, Stability results in learning theory, Analysis and Applications, vol. 3, no. 4, pp. 397-417, 2005.
DOI: 10.1142/S0219530505000650

S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan, Learnability, stability and uniform convergence, Journal of Machine Learning Research, vol. 11, pp. 2635-2670, 2010.

M. Stone, Cross-validatory choice and assessment of statistical predictions, Journal of the Royal Statistical Society, Series B, vol. 36, pp. 111-147, 1974.

S. Villa, L. Rosasco, and T. Poggio, On learnability, complexity and stability, in Empirical Inference, pp. 59-69, Springer, 2013.
DOI: 10.1007/978-3-642-41136-6_7

B. Yu, Stability, Bernoulli, vol. 19, no. 4, pp. 1484-1500, 2013.
DOI: 10.3150/13-BEJSP14

URL: https://hal.archives-ouvertes.fr/hal-00851253