Journal article, Applied and Computational Harmonic Analysis, 2021

On Bayesian estimation and proximity operators

Abstract

There are two major routes to address the ubiquitous family of inverse problems appearing in signal and image processing, such as denoising or deblurring. The first route relies on Bayesian modeling, where prior probabilities are used to embody models of both the distribution of the unknown variables and their statistical dependence with respect to the observed data; the estimation process typically relies on the minimization of an expected loss (e.g., minimum mean squared error, or MMSE). The second route has received much attention in the context of sparse regularization and compressive sensing: it consists in designing (often convex) optimization problems involving the sum of a data-fidelity term and a penalty term promoting certain properties of the unknowns (e.g., sparsity, promoted through an ℓ1 norm). Well-known relations between these two approaches have led to some widespread misconceptions. In particular, while the so-called Maximum A Posteriori (MAP) estimate with a Gaussian noise model does lead to an optimization problem with a quadratic data-fidelity term, we disprove through explicit examples the common belief that the converse is true. It has already been shown [7, 9] that, for denoising in the presence of additive Gaussian noise and for any prior probability on the unknowns, MMSE estimation can be expressed as a penalized least squares problem, with the apparent characteristics of a MAP estimation problem with Gaussian noise and a (generally) different prior on the unknowns. In other words, the variational approach is rich enough to build all possible MMSE estimators associated with additive Gaussian noise via a well-chosen penalty. We generalize these results beyond Gaussian denoising and characterize the noise models for which the same phenomenon occurs. In particular, we prove that with (a variant of) Poisson noise and any prior probability on the unknowns, MMSE estimation can again be expressed as the solution of a penalized least squares optimization problem. For additive scalar denoising, the phenomenon holds if and only if the noise distribution is log-concave; in particular, Laplacian denoising can (perhaps surprisingly) be expressed as the solution of a penalized least squares problem. In the multivariate case, the same phenomenon occurs when the noise model belongs to a particular subset of the exponential family. For multivariate additive denoising, the phenomenon holds if and only if the noise is white and Gaussian.
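To make the cited Gaussian result concrete, the scalar case can be summarized by a single identity (notation ours, not the paper's; it combines the classical Tweedie–Miyasawa formula for the posterior mean with the penalized least squares representation discussed above):

```latex
% Scalar Gaussian denoising, y = x + w with w ~ N(0, sigma^2) (notation ours).
% The MMSE estimator (posterior mean) is also a proximity operator, i.e. the
% solution of a penalized least squares problem, for a suitable penalty phi:
\[
  \hat{x}_{\mathrm{MMSE}}(y)
  = \mathbb{E}[x \mid y]
  = y + \sigma^{2}\,\bigl(\log p_{Y}\bigr)'(y)
  = \operatorname{prox}_{\varphi}(y)
  = \arg\min_{x}\; \tfrac{1}{2}(y - x)^{2} + \varphi(x),
\]
% where p_Y is the marginal density of the observation and phi is determined
% by the prior on x (and generally differs from the negative log-prior).
```

As a purely illustrative numerical sanity check (our own sketch, not code from the paper; `sigma`, `mmse`, and the two-point prior are choices made here for illustration), the snippet below uses the prior P(x = +1) = P(x = −1) = 1/2, for which the posterior mean is tanh(y/σ²). It recovers a penalty φ from the prox optimality condition φ'(f(y)) = y − f(y), then checks that the resulting penalized least squares problem reproduces the MMSE estimate:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative sketch (assumptions ours, not the paper's code): scalar Gaussian
# denoising y = x + w, w ~ N(0, sigma^2), with the two-point prior
# P(x = +1) = P(x = -1) = 1/2.  The MMSE estimator is the posterior mean
#   f(y) = E[x | y] = tanh(y / sigma^2),
# which the cited result says equals prox_phi for some penalty phi.
sigma = 0.7

def mmse(y):
    return np.tanh(y / sigma**2)

# Prox optimality: if f(y) = argmin_x 0.5*(y - x)^2 + phi(x), then
# phi'(f(y)) = y - f(y).  Since f is strictly increasing, we can recover
# phi' on the range of f by inverting f, then phi by trapezoidal integration.
xs = np.linspace(-0.999, 0.999, 2001)            # range of f is (-1, 1)
f_inv = np.array([brentq(lambda y, x=x: mmse(y) - x, -50.0, 50.0) for x in xs])
phi_prime = f_inv - xs                           # phi'(x) = f^{-1}(x) - x
phi = np.concatenate(([0.0], np.cumsum(
    0.5 * (phi_prime[1:] + phi_prime[:-1]) * np.diff(xs))))

# Sanity check: minimizing 0.5*(y - x)^2 + phi(x) on the grid should
# reproduce the MMSE estimate up to the grid resolution.
for y in (-1.5, -0.3, 0.5, 1.2):
    x_hat = xs[np.argmin(0.5 * (y - xs) ** 2 + phi)]
    print(f"y = {y:+.1f}   MMSE = {mmse(y):+.3f}   penalized LS = {x_hat:+.3f}")
```

Note that the penalty recovered this way is not the negative log of the two-point prior (which would be infinite off {−1, +1}); this gap between the MMSE-as-prox penalty and the MAP penalty is exactly the distinction the abstract emphasizes.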

Dates and versions

hal-01835108, version 1 (11-07-2018)
hal-01835108, version 2 (11-07-2018)
hal-01835108, version 3 (17-04-2019)


Cite

Rémi Gribonval, Mila Nikolova. On Bayesian estimation and proximity operators. Applied and Computational Harmonic Analysis, 2021, 50, pp. 49-72. ⟨10.1016/j.acha.2019.07.002⟩. ⟨hal-01835108v3⟩

