Universal regularizers for robust sparse coding and modeling

Ignacio Ramírez, Guillermo Sapiro

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

Sparse data models, in which data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical to the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms that have theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones. The presentation of the framework and its theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming, and classification.
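For context, the standard baseline the paper improves upon is ℓ1-regularized sparse coding: given a dictionary D and a signal y, find a sparse coefficient vector a minimizing ½‖y − Da‖² + λ‖a‖₁. The sketch below illustrates this baseline (not the paper's universal regularizers) via the classical iterative soft-thresholding algorithm (ISTA); the dictionary, signal, and parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=200):
    """Approximately solve min_a 0.5*||y - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy example: recover a 3-sparse code in a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # normalize atoms to unit norm
a_true = np.zeros(128)
a_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ a_true
a_hat = ista(D, y, lam=0.05)
```

Replacing the ℓ1 penalty (and hence the soft-thresholding step) with a different regularizer is exactly the design freedom the paper's universal-coding framework addresses.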

Original language: English (US)
Article number: 6193205
Pages (from-to): 3850-3864
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 21
Issue number: 9
State: Published - 2012
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Keywords

  • Classification
  • denoising
  • dictionary learning
  • sparse coding
  • universal coding
  • zooming
