TY - GEN
T1 - Negative Sample is Negative in Its Own Way
T2 - 2022 Findings of the Association for Computational Linguistics: NAACL 2022
AU - Fan, Zhihao
AU - Wei, Zhongyu
AU - Li, Zejun
AU - Wang, Siyuan
AU - Huang, Xuanjing
AU - Fan, Jianqing
N1 - Publisher Copyright:
© Findings of the Association for Computational Linguistics: NAACL 2022 - Findings.
PY - 2022
Y1 - 2022
N2 - The matching model is essential for image-text retrieval frameworks. Existing research usually trains the model with a triplet loss and explores various strategies to retrieve hard negative sentences from the dataset. We argue that the current retrieval-based negative sample construction approach is limited by the scale of the dataset and thus fails to identify negative samples of high difficulty for every image. We propose TAiloring neGative Sentences with Discrimination and Correction (TAGS-DC) to automatically generate synthetic sentences as negative samples. TAGS-DC is composed of masking and refilling to generate synthetic negative sentences with higher difficulty. To maintain this difficulty during training, we mutually improve retrieval and generation through parameter sharing. To further utilize the fine-grained semantics of mismatch in the negative sentences, we propose two auxiliary tasks, namely word discrimination and word correction, to improve training. In experiments, we verify the effectiveness of our model on MS-COCO and Flickr30K against current state-of-the-art models and demonstrate its robustness and faithfulness in further analysis.
AB - The matching model is essential for image-text retrieval frameworks. Existing research usually trains the model with a triplet loss and explores various strategies to retrieve hard negative sentences from the dataset. We argue that the current retrieval-based negative sample construction approach is limited by the scale of the dataset and thus fails to identify negative samples of high difficulty for every image. We propose TAiloring neGative Sentences with Discrimination and Correction (TAGS-DC) to automatically generate synthetic sentences as negative samples. TAGS-DC is composed of masking and refilling to generate synthetic negative sentences with higher difficulty. To maintain this difficulty during training, we mutually improve retrieval and generation through parameter sharing. To further utilize the fine-grained semantics of mismatch in the negative sentences, we propose two auxiliary tasks, namely word discrimination and word correction, to improve training. In experiments, we verify the effectiveness of our model on MS-COCO and Flickr30K against current state-of-the-art models and demonstrate its robustness and faithfulness in further analysis.
UR - http://www.scopus.com/inward/record.url?scp=85137356035&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137356035&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85137356035
T3 - Findings of the Association for Computational Linguistics: NAACL 2022 - Findings
SP - 2667
EP - 2678
BT - Findings of the Association for Computational Linguistics: NAACL 2022
PB - Association for Computational Linguistics (ACL)
Y2 - 10 July 2022 through 15 July 2022
ER -