
Fixed points of dictionary learning algorithms for sparse representations

Abstract: This work provides theoretical arguments for comparing dictionary learning algorithms for sparse representations. Three algorithms are considered: Sparsenet, MOD and K-SVD. The main theoretical result is that the fixed points of the Sparsenet and MOD dictionary update stages are exactly the critical points of the residual error energy function (i.e., points with null gradient, not necessarily local minima), whereas the set of K-SVD fixed points is strictly included in that critical point set. An example is also provided of a point where Sparsenet and MOD would stop, whereas K-SVD can still reach a solution with lower residual error. Further experiments show that the output of Sparsenet is a very good starting point for K-SVD: running Sparsenet followed by K-SVD provides a significant improvement in exact recovery rate and approximation quality.
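The claim about MOD can be checked directly: the MOD dictionary update is the least-squares solution D = X Aᵀ(A Aᵀ)⁻¹ for fixed coefficients A, and the gradient of the residual energy ‖X − DA‖²_F with respect to D is −2(X − DA)Aᵀ, which vanishes at that solution. A minimal numpy sketch (not from the paper; the problem sizes and random data are illustrative assumptions) verifying this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: signals X, fixed sparse-coding coefficients A.
n, m, N = 8, 12, 50  # signal dimension, dictionary size, number of signals
X = rng.standard_normal((n, N))
A = rng.standard_normal((m, N))

# MOD dictionary update: least-squares solution D = X A^T (A A^T)^{-1}.
D = X @ A.T @ np.linalg.inv(A @ A.T)

# Gradient of the residual energy ||X - D A||_F^2 with respect to D.
grad = -2.0 * (X - D @ A) @ A.T

# The gradient is numerically zero: the MOD update lands on a critical
# point of the error energy, matching the fixed-point result above.
print(np.max(np.abs(grad)))
```

Note that a null gradient only makes this a critical point, not necessarily a minimum; this is exactly the distinction the abstract draws between MOD/Sparsenet fixed points and the stricter K-SVD fixed-point set.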

Contributor: Boris Mailhé
Submitted on: Wednesday, April 3, 2013
Last modification on: Monday, October 13, 2014




  • HAL Id: hal-00807545, version 1



Boris Mailhé, Mark D. Plumbley. Fixed points of dictionary learning algorithms for sparse representations. 2013. ⟨hal-00807545⟩


