TY - CONF
T1 - Group sparse coding
AU - Bengio, Samy
AU - Pereira, Fernando
AU - Singer, Yoram
AU - Strelow, Dennis
PY - 2009
Y1 - 2009
AB - Bag-of-words document representations are often used in text, image, and video processing. While it is relatively easy to determine a suitable word dictionary for text documents, there is no simple mapping from raw images or videos to dictionary terms. The classical approach builds a dictionary using vector quantization over a large set of useful visual descriptors extracted from a training set, and uses a nearest-neighbor algorithm to count the number of occurrences of each dictionary word in documents to be encoded. More robust approaches have been proposed recently that represent each visual descriptor as a sparse weighted combination of dictionary words. While favoring a sparse representation at the level of visual descriptors, these methods do not, however, ensure that images have a sparse representation. In this work, we use mixed-norm regularization to achieve sparsity at the image level as well as a small overall dictionary. This approach can also be used to encourage using the same dictionary words for all the images in a class, providing a discriminative signal in the construction of image representations. Experimental results on a benchmark image classification dataset show that, when compact image or dictionary representations are needed for computational efficiency, the proposed approach yields better mean average precision in classification.
UR - http://www.scopus.com/inward/record.url?scp=84858744056&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84858744056&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84858744056
SN - 9781615679119
T3 - Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference
SP - 82
EP - 89
BT - Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference
PB - Neural Information Processing Systems
T2 - 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009
Y2 - 7 December 2009 through 10 December 2009
ER -