On the Rate of Convergence of the Bagged Nearest Neighbor Estimate
Abstract
Bagging is a simple way to combine estimates in order to improve their performance. This method, suggested by Breiman in 1996, proceeds by resampling from the original data set, constructing a predictor from each bootstrap sample, and combining the resulting predictors by averaging. By bagging an $n$-sample, the crude nearest neighbor regression estimate is transformed into a consistent weighted nearest neighbor regression estimate, which is amenable to statistical analysis. Letting the resampling size $k_n$ grow with $n$ in such a manner that $k_n\to \infty$ and $k_n/n\to 0$, it is shown that this estimate achieves optimal rates of convergence, regardless of whether resampling is done with or without replacement.
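The procedure described above (resample, fit the crude 1-NN estimate on each subsample, average the predictions) can be sketched as follows. This is a minimal illustration, not the paper's code; the function name `bagged_1nn_predict`, the number of bootstrap rounds `B`, and the toy data are all assumptions made for the example.

```python
import numpy as np

def bagged_1nn_predict(X, y, x0, k_n, B=100, rng=None):
    """Bagged 1-nearest-neighbor regression estimate at a query point x0.

    Draws B subsamples of size k_n (here without replacement), fits the
    crude 1-NN estimate on each, and averages the B predictions.
    Illustrative sketch only, not the authors' implementation.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    preds = np.empty(B)
    for b in range(B):
        # Subsample of size k_n without replacement (with replacement also works,
        # per the abstract the rate is the same in both cases).
        idx = rng.choice(n, size=k_n, replace=False)
        dists = np.linalg.norm(X[idx] - x0, axis=1)  # distances to the query point
        preds[b] = y[idx[np.argmin(dists)]]          # crude 1-NN prediction
    return preds.mean()

# Toy example: noisy regression of y = x1 + x2 on the unit square.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(500)
x0 = np.array([0.5, 0.5])
print(bagged_1nn_predict(X, y, x0, k_n=50, B=200, rng=1))
```

Averaging over subsamples turns the single nearest neighbor into a weighted nearest neighbor rule: each of the query point's ordered neighbors receives the probability of being the nearest neighbor within a random subsample of size $k_n$.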