DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks

Chong Xiang, Prateek Mittal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

State-of-the-art object detectors are vulnerable to localized patch hiding attacks, where an adversary introduces a small adversarial patch to make detectors miss the detection of salient objects. The patch attacker can carry out a physical-world attack by printing and attaching an adversarial patch to the victim object; this poses a challenge for the safe deployment of object detectors. In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks. DetectorGuard is inspired by recent advancements in robust image classification research; we ask: can we adapt robust image classifiers for robust object detection? Unfortunately, due to the difference between the two tasks, an object detector naively adapted from a robust image classifier 1) may not be robust in the adversarial setting or 2) may not even maintain decent performance in the clean setting. To address these two issues and build a high-performance robust object detector, we propose an objectness explaining strategy: we adapt a robust image classifier to predict objectness (i.e., the probability of an object being present) at every image location and then explain each objectness using the bounding boxes predicted by a conventional object detector. If all objectness is well explained, we output the predictions made by the conventional object detector; otherwise, we issue an attack alert. Notably, our objectness explaining strategy enables provable robustness for "free": 1) in the adversarial setting, we formally prove the end-to-end robustness of DetectorGuard on certified objects, i.e., it either detects the object or triggers an alert, against any patch hiding attacker within our threat model; 2) in the clean setting, our performance is almost the same as that of state-of-the-art object detectors.
Our evaluation on the PASCAL VOC, MS COCO, and KITTI datasets further demonstrates that DetectorGuard achieves the first provable robustness against localized patch hiding attacks at a negligible cost (< 1%) of clean performance.
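The objectness explaining strategy described in the abstract can be illustrated with a simplified sketch. Assuming a robust image classifier has produced a per-location objectness map and a conventional detector has produced bounding boxes, the decision rule below checks whether every high-objectness location is covered ("explained") by some box, and issues an attack alert otherwise. The function name, threshold, and the grid-mask formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def detectorguard_decision(objectness_map, boxes, obj_thresh=0.5):
    """Hypothetical sketch of DetectorGuard's objectness explaining rule.

    objectness_map: HxW array of per-location objectness scores
                    (assumed output of a robust image classifier).
    boxes: list of (x0, y0, x1, y1) integer boxes from a conventional
           detector, in the same coordinate frame as the map.
    Returns ("detect", boxes) if every high-objectness location is
    covered by some predicted box, else ("alert", None).
    """
    h, w = objectness_map.shape
    # Mask of locations the conventional detector's boxes "explain".
    explained = np.zeros((h, w), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        explained[y0:y1, x0:x1] = True
    # High objectness outside every box is unexplained: a possible
    # patch hiding attack, so trigger an attack alert.
    unexplained = (objectness_map > obj_thresh) & ~explained
    if unexplained.any():
        return "alert", None
    return "detect", boxes
```

This captures the "free" robustness argument in miniature: a hiding attack must suppress the detector's boxes, but then the robustly predicted objectness goes unexplained and the alert fires.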

Original language: English (US)
Title of host publication: CCS 2021 - Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 3177-3196
Number of pages: 20
ISBN (Electronic): 9781450384544
DOIs
State: Published - Nov 12 2021
Event: 27th ACM Annual Conference on Computer and Communication Security, CCS 2021 - Virtual, Online, Korea, Republic of
Duration: Nov 15 2021 - Nov 19 2021

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 27th ACM Annual Conference on Computer and Communication Security, CCS 2021
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 11/15/21 - 11/19/21

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications

Keywords

  • adversarial patch attack
  • object detection
  • provable robustness
