Abstract
Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical to the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard ℓ0 or ℓ1 ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
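The abstract contrasts the proposed universal regularizers with the standard ℓ0/ℓ1 ones. As background, the following is a minimal sketch (not the paper's method) of the ℓ1-regularized sparse coding baseline, solved with plain ISTA (gradient step plus soft-thresholding); all dimensions and the value of `lam` are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: l1-regularized sparse coding via ISTA, the standard
# baseline the abstract's universal regularizers are compared against.
# Sizes and the regularization weight lam are assumptions, not paper values.

rng = np.random.default_rng(0)
m, p, k = 64, 128, 5            # signal dim, dictionary atoms, true sparsity

D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms

a_true = np.zeros(p)
a_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
x = D @ a_true                  # signal = sparse combination of atoms

lam = 0.05
L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the quadratic term
a = np.zeros(p)
for _ in range(500):
    grad = D.T @ (D @ a - x)                             # gradient step
    z = a - grad / L
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-threshold

print(np.count_nonzero(np.abs(a) > 1e-3))  # only a few active coefficients
```

The soft-thresholding operator is exactly where the choice of regularizer enters: a different sparsity prior replaces this shrinkage rule, which is the design space the paper's universal-coding framework addresses.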
| Original language | English (US) |
|---|---|
| Article number | 6193205 |
| Pages (from-to) | 3850-3864 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 21 |
| Issue number | 9 |
| DOIs | |
| State | Published - 2012 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design
Keywords
- classification
- denoising
- dictionary learning
- sparse coding
- universal coding
- zooming