TY - GEN
T1 - Enriching Word Embeddings with Temporal and Spatial Information
AU - Gong, Hongyu
AU - Bhat, Suma
AU - Viswanath, Pramod
N1 - Publisher Copyright:
© 2020 Association for Computational Linguistics.
PY - 2020
Y1 - 2020
AB - The meaning of a word is closely linked to sociocultural factors that can change over time and location, resulting in corresponding meaning changes. Taking a global view of words and their meanings in a widely used language, such as English, may require us to capture more refined semantics for use in time-specific or location-aware situations, such as the study of cultural trends or language use. However, popular vector representations for words do not adequately include temporal or spatial information. In this work, we present a model for learning word representations conditioned on time and location. In addition to capturing meaning changes over time and location, we require that the resulting word embeddings retain salient semantic and geometric properties. We train our model on time- and location-stamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and location. We note that our model compares favorably with the state-of-the-art for time-specific embeddings, and serves as a new benchmark for location-specific embeddings.
UR - http://www.scopus.com/inward/record.url?scp=85111661707&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111661707&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85111661707
T3 - CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference
SP - 1
EP - 11
BT - CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference
A2 - Fernández, Raquel
A2 - Linzen, Tal
PB - Association for Computational Linguistics (ACL)
T2 - 24th Conference on Computational Natural Language Learning, CoNLL 2020
Y2 - 19 November 2020 through 20 November 2020
ER -