TY - GEN
T1 - Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation
T2 - 15th ACM Workshop on Artificial Intelligence and Security, AISec 2022 - Co-located with CCS 2022
AU - Wu, Tong
AU - Wang, Tianhao
AU - Sehwag, Vikash
AU - Mahloujifar, Saeed
AU - Mittal, Prateek
N1 - Publisher Copyright:
© 2022 Owner/Author.
PY - 2022/11/11
Y1 - 2022/11/11
N2 - Recent works have demonstrated that deep learning models are vulnerable to backdoor poisoning attacks, which instill spurious correlations with external trigger patterns or objects (e.g., stickers, sunglasses). We find that such external trigger signals are not necessary: highly effective backdoors can be inserted using rotation-based image transformations. Our method constructs the poisoned dataset by rotating a limited number of objects and labeling them incorrectly; once trained on it, the victim's model makes undesirable predictions during run-time inference. Through comprehensive empirical studies on image classification and object detection tasks, we show that the attack achieves a high success rate while maintaining clean performance. Furthermore, we evaluate standard data augmentation techniques and five different backdoor defenses against our attack and find that none of them serves as a consistent mitigation. Since the attack only requires rotating the object, it can be easily deployed in the real world, as we demonstrate in both image classification and object detection applications. Overall, our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks. Our video demo is available at https://youtu.be/6JIF8wnX34M
KW - physically realizable attacks
KW - rotation backdoor attacks
KW - spatial robustness
UR - http://www.scopus.com/inward/record.url?scp=85144027796&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144027796&partnerID=8YFLogxK
U2 - 10.1145/3560830.3563730
DO - 10.1145/3560830.3563730
M3 - Conference contribution
AN - SCOPUS:85144027796
T3 - AISec 2022 - Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2022
SP - 91
EP - 102
BT - AISec 2022 - Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2022
PB - Association for Computing Machinery, Inc
Y2 - 11 November 2022
ER -