TY - GEN
T1 - A theoretical analysis of contrastive unsupervised representation learning
AU - Arora, Sanjeev
AU - Khandeparkar, Hrishikesh
AU - Khodak, Mikhail
AU - Plevrakis, Orestis
AU - Saunshi, Nikunj
N1 - Publisher Copyright:
© 2019 International Machine Learning Society (IMLS).
PY - 2019
Y1 - 2019
N2 - Recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging the availability of pairs of semantically "similar" data points and "negative samples," the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term contrastive learning for such algorithms and presents a theoretical framework for analyzing them by introducing latent classes and hypothesizing that semantically similar points are sampled from the same latent class. This framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that comprises a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce (labeled) sample complexity on downstream tasks. We conduct controlled experiments in both the text and image domains to support the theory.
AB - Recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging the availability of pairs of semantically "similar" data points and "negative samples," the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term contrastive learning for such algorithms and presents a theoretical framework for analyzing them by introducing latent classes and hypothesizing that semantically similar points are sampled from the same latent class. This framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that comprises a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce (labeled) sample complexity on downstream tasks. We conduct controlled experiments in both the text and image domains to support the theory.
UR - http://www.scopus.com/inward/record.url?scp=85078009380&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078009380&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85078009380
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 9904
EP - 9923
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -