Analyzing the robustness of open-world machine learning

Vikash Sehwag, Chawin Sitawarin, Arjun Nitin Bhagoji, Daniel Cullina, Prateek Mittal, Liwei Song, Mung Chiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

When deploying machine learning models in real-world applications, an open-world learning framework is needed to deal with both normal in-distribution inputs and undesired out-of-distribution (OOD) inputs. Open-world learning frameworks include OOD detectors that aim to discard input examples that are not from the same distribution as the training data of machine learning classifiers. However, our understanding of current OOD detectors is limited to the setting of benign OOD data, and an open question is whether they are robust in the presence of adversaries. In this paper, we present the first analysis of the robustness of open-world learning frameworks in the presence of adversaries by introducing and designing OOD adversarial examples. Our experimental results show that current OOD detectors can be easily evaded by slightly perturbing benign OOD inputs, revealing a severe limitation of current open-world learning frameworks. Furthermore, we find that OOD adversarial examples also pose a strong threat to adversarial-training-based defense methods in spite of their effectiveness against in-distribution adversarial attacks. To counteract these threats and ensure the trustworthy detection of OOD inputs, we outline a preliminary design for a robust open-world machine learning framework.
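To illustrate the core idea of an OOD adversarial example described in the abstract, the toy sketch below perturbs a benign OOD input so that a confidence-based detector accepts it as in-distribution. Everything here is hypothetical: the linear "classifier" `W`, the max-softmax OOD score, and the finite-difference PGD loop are stand-ins for illustration, not the paper's actual models or attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D linear "classifier" with 3 classes; its maximum softmax
# probability serves as a confidence-based OOD score (hypothetical setup).
W = rng.normal(size=(2, 3))

def max_softmax(x):
    """Detector score: high score => input looks in-distribution."""
    logits = x @ W
    z = logits - logits.max()            # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

def ood_adversarial_example(x, eps=0.5, step=0.05, iters=50):
    """PGD-style attack: nudge an OOD input within an l-infinity ball of
    radius eps to *raise* the detector's confidence score."""
    x_adv = x.copy()
    best, best_score = x.copy(), max_softmax(x)
    h = 1e-4
    for _ in range(iters):
        # Numerical gradient of the score (finite differences are fine
        # for this 2-D toy; real attacks use automatic differentiation).
        g = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            d = np.zeros_like(x_adv)
            d[i] = h
            g[i] = (max_softmax(x_adv + d) - max_softmax(x_adv - d)) / (2 * h)
        x_adv = x_adv + step * np.sign(g)           # ascend the score
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into eps-ball
        score = max_softmax(x_adv)
        if score > best_score:                      # keep the best iterate
            best, best_score = x_adv.copy(), score
    return best

x_ood = np.array([0.1, -0.2])   # a benign "OOD" point in this toy setup
x_adv = ood_adversarial_example(x_ood)
```

By construction the returned point never scores lower than the original input, mirroring the abstract's finding that slight perturbations of benign OOD inputs can push them past a confidence-thresholded detector.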

Original language: English (US)
Title of host publication: AISec 2019 - Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security
Publisher: Association for Computing Machinery
Pages: 105-116
Number of pages: 12
ISBN (Electronic): 9781450368339
DOIs: https://doi.org/10.1145/3338501.335737
State: Published - Nov 11 2019
Event: 12th ACM Workshop on Artificial Intelligence and Security, AISec 2019, co-located with CCS 2019 - London, United Kingdom
Duration: Nov 15 2019 → …

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 12th ACM Workshop on Artificial Intelligence and Security, AISec 2019, co-located with CCS 2019
Country: United Kingdom
City: London
Period: 11/15/19 → …

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Networks and Communications

Keywords

  • Adversarial example
  • Deep learning
  • Open world recognition


Cite this

Sehwag, V., Sitawarin, C., Bhagoji, A. N., Cullina, D., Mittal, P., Song, L., & Chiang, M. (2019). Analyzing the robustness of open-world machine learning. In AISec 2019 - Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security (pp. 105-116). (Proceedings of the ACM Conference on Computer and Communications Security). Association for Computing Machinery. https://doi.org/10.1145/3338501.335737