TY - GEN
T1 - Self-Destructing Models
T2 - 2023 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2023
AU - Henderson, Peter
AU - Mitchell, Eric
AU - Manning, Christopher
AU - Jurafsky, Dan
AU - Finn, Chelsea
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/8/8
Y1 - 2023/8/8
AB - A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and beneficial machine learning systems. Policy tools such as restricted model access and export controls are the primary methods currently used to mitigate such dual-use risks. In this work, we review potential safe-release strategies and argue that both policymakers and AI researchers would benefit from fundamentally new technologies enabling more precise control over the downstream usage of open-source foundation models. We propose one such approach: the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks without sacrificing performance on desirable tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning, which we call meta-learned adversarial censoring (MLAC). In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification without harming the model's ability to perform profession classification.
UR - http://www.scopus.com/inward/record.url?scp=85173618687&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85173618687&partnerID=8YFLogxK
U2 - 10.1145/3600211.3604690
DO - 10.1145/3600211.3604690
M3 - Conference contribution
AN - SCOPUS:85173618687
T3 - AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
SP - 287
EP - 296
BT - AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
Y2 - 8 August 2023 through 10 August 2023
ER -