A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: application to M/EEG source imaging

Abstract: Majorization-minimization (MM) is a standard iterative optimization technique which consists in minimizing a sequence of convex surrogate functionals. MM approaches have been particularly successful in tackling inverse problems and statistical machine learning problems where the regularization term is a sparsity-promoting concave function. However, due to non-convexity, the solution found by MM depends on its initialization. Uniform initialization is the most natural and often employed strategy, as it boils down to penalizing all coefficients equally in the first MM iteration. Yet, this arbitrary choice can lead to unsatisfactory results in severely under-determined inverse problems such as source imaging with magneto- and electro-encephalography (M/EEG). The framework of hierarchical Bayesian modeling (HBM) is an alternative approach to encode sparsity. This work shows that for certain hierarchical models, a simple alternating scheme to compute fully Bayesian maximum a posteriori (MAP) estimates leads to the exact same sequence of updates as a standard MM strategy (see the adaptive lasso). With this parallel outlined, we show how to improve upon these MM techniques by probing the multimodal posterior density using Markov Chain Monte Carlo (MCMC) techniques.
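The sketch below illustrates, in a hedged and simplified form, the kind of MM scheme the abstract refers to: a concave log-sum penalty is majorized at each iteration by a weighted l1 term, so every MM step reduces to a weighted lasso, and uniform weights at the first iteration correspond to the "uniform initialization" discussed above. The function names, variables (X, y, alpha, eps, n_iter) and the use of scikit-learn's Lasso as the inner convex solver are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a reweighted-l1 majorization-minimization scheme
# (adaptive-lasso-style updates), assuming a log-sum concave penalty.
import numpy as np
from sklearn.linear_model import Lasso


def reweighted_lasso(X, y, alpha=0.1, eps=1e-3, n_iter=10):
    """MM for min_w 1/(2n) ||y - Xw||^2 + alpha * sum_j log(|w_j| + eps).

    Each iteration majorizes the concave log penalty by its tangent,
    which yields a weighted lasso; uniform weights in the first
    iteration make it a plain lasso (uniform initialization).
    """
    n_features = X.shape[1]
    weights = np.ones(n_features)  # uniform initialization
    w = np.zeros(n_features)
    for _ in range(n_iter):
        # Solve the weighted lasso by rescaling columns, then running a
        # standard lasso and rescaling the coefficients back.
        X_scaled = X / weights[np.newaxis, :]
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
        lasso.fit(X_scaled, y)
        w = lasso.coef_ / weights
        # Reweighting step: tangent of the log penalty at the current estimate.
        weights = 1.0 / (np.abs(w) + eps)
    return w


if __name__ == "__main__":
    # Toy under-determined problem (hypothetical data, for illustration only).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 200))
    w_true = np.zeros(200)
    w_true[:5] = rng.standard_normal(5)
    y = X @ w_true + 0.01 * rng.standard_normal(50)
    w_hat = reweighted_lasso(X, y)
    print("recovered support:", np.flatnonzero(w_hat))
```

Because the penalty is non-convex, the fixed point reached by this loop depends on the initial weights, which is precisely the sensitivity to initialization that motivates the MCMC-informed initializations studied in the paper.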
Metadata

https://hal.inria.fr/hal-01970744
Contributor: Alexandre Gramfort
Submitted on: Saturday, January 5, 2019 - 11:41:42 PM
Last modification on: Friday, March 8, 2019 - 1:20:25 AM

Identifiers

  • HAL Id: hal-01970744, version 1

Citation

Yousra Bekhti, Felix Lucka, Joseph Salmon, Alexandre Gramfort. A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: application to M/EEG source imaging. Inverse Problems, IOP Publishing, 2018, 34 (8), pp. 085010. ⟨hal-01970744⟩
