TY - JOUR
T1 - POSTER
T2 - 25th ACM Conference on Computer and Communications Security, CCS 2018
AU - Sehwag, Vikash
AU - Sitawarin, Chawin
AU - Bhagoji, Arjun Nitin
AU - Mosenia, Arsalan
AU - Chiang, Mung
AU - Mittal, Prateek
N1 - Funding Information:
This work was supported by the National Science Foundation under grants CNS-1553437 and CNS-1409415, by the Office of Naval Research (ONR) through a Young Investigator Prize (YIP), by IBM through an IBM Faculty Award, and by Intel through an Intel Faculty Research Award.
Publisher Copyright:
© 2018 Copyright held by the owner/author(s).
PY - 2018
Y1 - 2018
N2 - Deep neural networks (DNNs) have enabled success in learning tasks such as image classification, semantic image segmentation, and steering angle prediction, which can be key components of the computer vision pipeline of safety-critical systems such as autonomous vehicles. However, previous work has demonstrated the feasibility of using physical adversarial examples to attack image classification systems. In this work, we argue that the success of realistic adversarial examples is highly dependent on both the structure of the training data and the learning objective. In particular, realistic, physical-world attacks on semantic segmentation and steering angle prediction constrain the adversary to add localized perturbations, since it is very difficult to add perturbations across the entire field of view of input sensors such as cameras for applications like autonomous vehicles. We empirically study the effectiveness of adversarial examples generated under the strict locality constraints imposed by the aforementioned applications. Even for image classification, we observe that the success of the adversary under locality constraints depends on the training dataset. For steering angle prediction, we observe that adversarial perturbations localized to an off-road patch are significantly less successful than those placed on-road. For semantic segmentation, we observe that perturbations localized to small patches are only effective at changing the label in and around those patches, making non-local attacks difficult for an adversary. We further provide a comparative evaluation of these localized attacks across various datasets and deep learning models for each task.
AB - Deep neural networks (DNNs) have enabled success in learning tasks such as image classification, semantic image segmentation, and steering angle prediction, which can be key components of the computer vision pipeline of safety-critical systems such as autonomous vehicles. However, previous work has demonstrated the feasibility of using physical adversarial examples to attack image classification systems. In this work, we argue that the success of realistic adversarial examples is highly dependent on both the structure of the training data and the learning objective. In particular, realistic, physical-world attacks on semantic segmentation and steering angle prediction constrain the adversary to add localized perturbations, since it is very difficult to add perturbations across the entire field of view of input sensors such as cameras for applications like autonomous vehicles. We empirically study the effectiveness of adversarial examples generated under the strict locality constraints imposed by the aforementioned applications. Even for image classification, we observe that the success of the adversary under locality constraints depends on the training dataset. For steering angle prediction, we observe that adversarial perturbations localized to an off-road patch are significantly less successful than those placed on-road. For semantic segmentation, we observe that perturbations localized to small patches are only effective at changing the label in and around those patches, making non-local attacks difficult for an adversary. We further provide a comparative evaluation of these localized attacks across various datasets and deep learning models for each task.
KW - Adversarial examples
KW - Computer vision
KW - Deep learning
UR - http://www.scopus.com/inward/record.url?scp=85087340303&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087340303&partnerID=8YFLogxK
U2 - 10.1145/3243734.3278515
DO - 10.1145/3243734.3278515
M3 - Conference article
AN - SCOPUS:85087340303
SN - 1543-7221
VL - 2018-January
SP - 2285
EP - 2287
JO - Proceedings of the ACM Conference on Computer and Communications Security
JF - Proceedings of the ACM Conference on Computer and Communications Security
Y2 - 15 October 2018
ER -