TY - GEN
T1 - When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer
AU - Deshpande, Ameet
AU - Talukdar, Partha
AU - Narasimhan, Karthik
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
N2 - While recent work on multilingual language models has demonstrated their capacity for cross-lingual zero-shot transfer, there is a lack of consensus in the community as to what shared properties between languages enable transfer on downstream tasks. Analyses involving pairs of natural languages are often inconclusive and contradictory since languages simultaneously differ in many linguistic aspects. In this paper, we perform a large-scale empirical study to isolate the effects of various linguistic properties by measuring zero-shot transfer between four diverse natural languages and their counterparts constructed by modifying aspects such as the script, word order, and syntax. Among other things, our experiments show that the absence of sub-word overlap significantly affects zero-shot transfer when languages differ in their word order, and there is a strong correlation between transfer performance and word embedding alignment between languages (e.g., ρs = 0.94 on the task of NLI). Our results call for focus in multilingual models on explicitly improving word embedding alignment between languages rather than relying on its implicit emergence.
UR - http://www.scopus.com/inward/record.url?scp=85138436839&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138436839&partnerID=8YFLogxK
DO - 10.18653/v1/2022.naacl-main.264
M3 - Conference contribution
AN - SCOPUS:85138436839
T3 - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
SP - 3610
EP - 3623
BT - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics
PB - Association for Computational Linguistics (ACL)
T2 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022
Y2 - 10 July 2022 through 15 July 2022
ER -