Conference papers

The Iso-regularization Descent Algorithm for the LASSO

Manuel Loth 1, 2 Philippe Preux 1, 2
2 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract: Following the introduction by Tibshirani of the LASSO technique for feature selection in regression, two algorithms were proposed by Osborne et al. for solving the associated problem. One is a homotopy method that gained popularity as the LASSO modification of the LARS algorithm. The other is a finite-step descent method that follows a path on the constraint polytope, and it seems to have been largely ignored. One reason may be that it solves the constrained formulation of the LASSO, as opposed to the more practical regularized formulation. We give here an adaptation of this algorithm that solves the regularized problem, has a simpler formulation, and outperforms state-of-the-art algorithms in terms of speed.
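To make the problem concrete: the regularized LASSO formulation referred to in the abstract minimizes a least-squares loss plus an L1 penalty. The sketch below is not the paper's iso-regularization descent algorithm; it is a generic cyclic coordinate-descent solver for the same regularized objective (function names and the fixed iteration count are illustrative choices, not from the paper).

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrink z toward 0 by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5 * ||y - X w||^2 + lam * ||w||_1
    by cyclic coordinate descent (a standard baseline solver,
    not the algorithm of the paper)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(d):
            # residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            # closed-form 1-D minimization with soft-thresholding
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w
```

A large regularization weight `lam` drives every coefficient to exactly zero, which is the sparsity-inducing behavior that motivates the LASSO for feature selection.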
Contributor: Manuel Loth
Submitted on: Monday, September 6, 2010 - 2:41:59 PM
Last modification on: Thursday, January 20, 2022 - 4:16:20 PM
Long-term archiving on: Thursday, December 1, 2016 - 8:24:52 AM


  • HAL Id: inria-00508257, version 2



Manuel Loth, Philippe Preux. The Iso-regularization Descent Algorithm for the LASSO. 17th International Conference on Neural Information Processing, Nov 2010, Sydney, Australia. ⟨inria-00508257v2⟩