Journal article, Journal of Machine Learning Research, 2010

On the rate of convergence of the bagged nearest neighbor estimate

Gérard Biau, Frédéric Cérou, Arnaud Guyader

Abstract

Bagging is a simple way to combine estimates in order to improve their performance. This method, suggested by Breiman in 1996, proceeds by resampling from the original data set, constructing a predictor from each subsample, and combining the resulting predictors. By bagging an n-sample, the crude nearest neighbor regression estimate is turned into a consistent weighted nearest neighbor regression estimate, which is amenable to statistical analysis. Letting the resampling size k_n grow appropriately with n, it is shown that this estimate may achieve the optimal rate of convergence, regardless of whether resampling is done with or without replacement. Since the rate-optimal estimate depends on the unknown distribution of the observations, adaptation results obtained by data-splitting are presented.
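As an illustration of the construction described in the abstract, the following is a minimal sketch of a bagged 1-nearest-neighbor regression estimate: many subsamples of size k_n are drawn, with or without replacement, a 1-NN prediction is computed on each, and the predictions are averaged. The function name bagged_1nn, the number of resamples, and all parameter defaults are illustrative assumptions, not taken from the paper (which analyzes the infinite-resampling, weighted nearest neighbor form of the estimate).

```python
import numpy as np

def bagged_1nn(X, y, x_query, k_n, n_bags=200, replace=False, seed=None):
    """Bagged 1-NN regression sketch: average the 1-nearest-neighbor
    predictions computed on n_bags random subsamples of size k_n,
    drawn with or without replacement from (X, y)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    preds = []
    for _ in range(n_bags):
        # draw a subsample of size k_n from the original n-sample
        idx = rng.choice(n, size=k_n, replace=replace)
        # 1-nearest neighbor of x_query within the subsample
        dists = np.linalg.norm(X[idx] - x_query, axis=1)
        preds.append(y[idx[np.argmin(dists)]])
    # bagging step: aggregate the subsample predictors by averaging
    return np.mean(preds)

# Toy usage on a simple regression problem, y = x1 + noise.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = X[:, 0] + 0.1 * rng.normal(size=500)
print(bagged_1nn(X, y, np.array([0.5, 0.5]), k_n=50, seed=1))
```

In line with the abstract, the key tuning quantity here is the resampling size k_n, which must grow with n (but more slowly than n) for the estimate to be consistent and rate-optimal.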

Dates and versions

hal-00911992, version 1 (01-12-2013)

Identifiers

  • HAL Id: hal-00911992, version 1

Cite

Gérard Biau, Frédéric Cérou, Arnaud Guyader. On the rate of convergence of the bagged nearest neighbor estimate. Journal of Machine Learning Research, 2010, 11, pp.687-712. ⟨hal-00911992⟩