TY - GEN
T1 - DeHiB: Deep Hidden Backdoor Attack on Semi-Supervised Learning via Adversarial Perturbation
T2 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
AU - Yan, Zhicong
AU - Li, Gaolei
AU - Tian, Yuan
AU - Wu, Jun
AU - Li, Shenghong
AU - Chen, Mingzhe
AU - Poor, H. Vincent
N1 - Funding Information:
This research work was funded in part by the National Natural Science Foundation of China (No. U20B2072, 61971283, U20B2048, and 61972255), the 2020 Industrial Internet Innovation Development Project of the Ministry of Industry and Information Technology of P.R. China "Smart Energy Internet Security Situation Awareness Platform Project", the National Key Research and Development Project, and in part by the U.S. National Science Foundation under Grant CCF-1908308.
Publisher Copyright:
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2021
Y1 - 2021
AB - The threat of data-poisoning backdoor attacks on learning algorithms typically comes from the labeled data used for learning. However, in deep semi-supervised learning (SSL), unknown threats mainly stem from unlabeled data. In this paper, we propose a novel deep hidden backdoor (DeHiB) attack for SSL-based systems. In contrast to conventional attacking methods, DeHiB can feed malicious unlabeled training data to the semi-supervised learner so as to enable the SSL model to output premeditated results. In particular, a robust adversarial perturbation generator regularized by a unified objective function is proposed to generate poisoned data. To alleviate the negative impact of trigger patterns on model accuracy and improve the attack success rate, a novel contrastive data poisoning strategy is designed. Using the proposed data poisoning scheme, one can implant the backdoor into the SSL model using the raw data without handcrafted labels. Extensive experiments based on the CIFAR10 and CIFAR100 datasets demonstrate the effectiveness and crypticity of the proposed scheme.
UR - http://www.scopus.com/inward/record.url?scp=85124326523&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124326523&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85124326523
T3 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
SP - 10585
EP - 10593
BT - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
PB - Association for the Advancement of Artificial Intelligence
Y2 - 2 February 2021 through 9 February 2021
ER -