A Bayesian reassessment of nearest-neighbour classification

Abstract: The k-nearest-neighbor (knn) procedure is a well-known deterministic method used in supervised classification. This article proposes a reassessment of this approach as a statistical technique derived from a proper probabilistic model; in particular, we modify the assessment found in Holmes and Adams, and evaluated by Manocha and Girolami, where the underlying probabilistic model is not completely well defined. Once provided with a clear probabilistic basis for the knn procedure, we derive computational tools for Bayesian inference on the parameters of the corresponding model. In particular, we assess the difficulties inherent in both pseudo-likelihood and path-sampling approximations of an intractable normalizing constant. We implement a correct MCMC sampler based on perfect sampling. When perfect sampling is not available, we use instead a Gibbs sampling approximation. Illustrations of the performance of the corresponding Bayesian classifier are provided for benchmark datasets, demonstrating in particular the limitations of the pseudo-likelihood approximation in this setup.
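For readers unfamiliar with the baseline procedure the paper reassesses, the following is a minimal sketch of the classical deterministic knn rule (majority vote among the k nearest training points under Euclidean distance). It is purely illustrative and does not implement the Bayesian model of the article; all names and the toy data are invented for this example.

```python
from collections import Counter

def knn_classify(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Squared Euclidean distance from the query to every training point.
    dists = [sum((a - b) ** 2 for a, b in zip(p, query)) for p in train_points]
    # Indices of the k smallest distances.
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy example: two small clusters in the plane.
X = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.1, 0.8)]
y = ["a", "a", "b", "b"]
print(knn_classify(X, y, (1.0, 0.9), k=3))  # → b
```

The paper's contribution is to replace this deterministic vote with a properly defined probabilistic model, on which Bayesian inference (for k and the interaction parameter) can then be carried out.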
Document type: Journal article
Journal of the American Statistical Association, Taylor & Francis, 2009, 104 (485), pp.263-273

Cited literature: [36 references]

Contributor: Jean-Michel Marin
Submitted on: Monday, March 3, 2008 - 13:51:47
Last modified on: Wednesday, November 21, 2018 - 16:12:02
Document(s) archived on: Friday, November 25, 2016 - 22:56:54


Files produced by the author(s)


  • HAL Id : inria-00143783, version 4


Lionel Cucala, Jean-Michel Marin, Christian Robert, Mike Titterington. A Bayesian reassessment of nearest-neighbour classification. Journal of the American Statistical Association, Taylor & Francis, 2009, 104 (485), pp.263-273. 〈inria-00143783v4〉

