Some Rates of Convergence for the Selected Lasso Estimator

Pascal Massart ¹ ², Caroline Meynet ¹
1 LMO - Laboratoire de Mathématiques d'Orsay
2 SELECT - Model selection in statistical learning, Inria Saclay - Île-de-France
Abstract: We consider the estimation of a function in some ordered finite or infinite dictionary. We focus on the selected Lasso estimator introduced by Massart and Meynet (2011) as an adaptation of the Lasso suited to infinite dictionaries. We use the oracle inequality established by Massart and Meynet (2011) to derive rates of convergence of this estimator over a wide range of function classes described by interpolation spaces, as in Barron et al. (2008). The results highlight that the selected Lasso estimator is adaptive to the smoothness of the function to be estimated, in contrast to the classical Lasso or the greedy algorithm considered by Barron et al. (2008). Moreover, we prove that the rates of convergence of this estimator are optimal in the orthonormal case.
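The orthonormal case mentioned at the end of the abstract is the one setting where the Lasso has a closed form: with an orthonormal design, the Lasso solution is obtained by soft-thresholding the ordinary least-squares coefficients. The sketch below (not from the paper; the data, penalty level `lam`, and variable names are illustrative) shows this closed form and the resulting sparsity:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding: shrink toward 0 by t, clip at 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Illustrative design with orthonormal columns (so X^T X = I).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 10)))
X = Q

# Sparse true coefficient vector: only the first 3 entries are nonzero.
beta = np.zeros(10)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.01 * rng.normal(size=50)

lam = 0.1                          # illustrative penalty level
ols = X.T @ y                      # OLS coefficients under orthonormality
lasso = soft_threshold(ols, lam)   # closed-form Lasso solution
```

Large coefficients survive (shrunk by `lam`) while coefficients at the noise level are set exactly to zero, which is the mechanism behind the adaptivity-to-sparsity discussed in the abstract. The selected Lasso of the paper additionally chooses the penalty level by model selection over an ordered dictionary, which this sketch does not attempt.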
Document type: Conference papers
Contributor: Erwan Le Pennec
Submitted on: Friday, January 18, 2013 - 5:24:43 PM
Last modification on: Sunday, June 26, 2022 - 11:57:51 AM


HAL Id: hal-00778116, version 1



Pascal Massart, Caroline Meynet. Some Rates of Convergence for the Selected Lasso Estimator. ALT 2012 - 23rd International Conference on Algorithmic Learning Theory, Oct 2012, Lyon, France. pp. 17-33. ⟨hal-00778116⟩