Universal priors for sparse modeling

Ignacio Ramírez, Federico Lecumberry, Guillermo Sapiro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

22 Scopus citations

Abstract

Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical to the success of such models. In this work, we use tools from information theory to propose a sparsity regularization term which has several theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones, and which leads to improved coding performance and accuracy in reconstruction tasks. We also briefly report on further improvements obtained by imposing low mutual coherence and Gram matrix norm on the learned dictionaries.
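The abstract refers to the standard sparse coding setup that the proposed regularizer builds on. Below is a minimal sketch, in Python with NumPy, of that generic setup: coding a signal over a fixed dictionary with the usual ℓ1 penalty (solved by ISTA) and, for contrast, a nonconvex log-type penalty handled by iteratively reweighted ℓ1. The dictionary, signal, and parameters (lam, beta) are synthetic illustrations, and the log-sum penalty is a generic stand-in for a non-ℓ1 regularizer; it is not the specific universal prior derived in the paper.

```python
# Minimal sparse-coding sketch (illustrative only; not the paper's algorithm).
# Compares l1-regularized coding with a log-type penalty via reweighted l1.
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_weighted_l1(D, x, lam, w, n_iter=500):
    """Minimize 0.5*||x - D a||^2 + lam * sum_i w_i |a_i| with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam * w / L)
    return a

def sparse_code_l1(D, x, lam):
    """Standard l1-regularized sparse coding."""
    return ista_weighted_l1(D, x, lam, np.ones(D.shape[1]))

def sparse_code_log(D, x, lam, beta=0.1, n_outer=10):
    """Log-sum penalty sum_i log(beta + |a_i|), approximated by
    iteratively reweighted l1 with weights 1/(beta + |a_i|)."""
    a = sparse_code_l1(D, x, lam)
    for _ in range(n_outer):
        w = 1.0 / (beta + np.abs(a))
        a = ista_weighted_l1(D, x, lam, w)
    return a

# Synthetic example: random unit-norm dictionary, signal from 5 active atoms.
m, p, k = 64, 128, 5
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(p)
a_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
x = D @ a_true + 0.01 * rng.standard_normal(m)

a_l1 = sparse_code_l1(D, x, lam=0.05)
a_log = sparse_code_log(D, x, lam=0.05)
print("nonzeros (l1): ", np.sum(np.abs(a_l1) > 1e-3))
print("nonzeros (log):", np.sum(np.abs(a_log) > 1e-3))
print("recon error (l1): ", np.linalg.norm(x - D @ a_l1))
print("recon error (log):", np.linalg.norm(x - D @ a_log))
```

On data like this, the reweighted scheme tends to suppress small spurious coefficients more aggressively than plain ℓ1, which is the kind of behavior a heavier-tailed, non-ℓ1 prior encourages; how the paper's universal prior achieves and improves on this is detailed in the full text.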

Original language: English (US)
Title of host publication: CAMSAP 2009 - 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing
Pages: 197-200
Number of pages: 4
DOIs
State: Published - 2009
Externally published: Yes
Event: 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2009 - Aruba, Netherlands
Duration: Dec 13 2009 - Dec 16 2009

Publication series

Name: CAMSAP 2009 - 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing

Other

Other: 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2009
Country/Territory: Netherlands
City: Aruba
Period: 12/13/09 - 12/16/09

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Software
