TY - GEN
T1 - Poster
T2 - 24th ACM SIGSAC Conference on Computer and Communications Security, CCS 2017
AU - Song, Liwei
AU - Mittal, Prateek
N1 - Funding Information:
This work was supported in part by NSF awards CNS-1553437, EARS-1642962 and CNS-1409415.
PY - 2017/10/30
Y1 - 2017/10/30
N2 - Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also provide new attack opportunities for adversaries. Previous papers attack voice assistants with obfuscated voice commands by leveraging the gap between speech recognition systems and human voice perception. The limitation is that these obfuscated commands are audible and thus conspicuous to device owners. In this poster, we propose a novel mechanism to directly attack the microphone used for sensing voice data with inaudible voice commands. We show that the adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasounds to cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2-3 meters.
AB - Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also provide new attack opportunities for adversaries. Previous papers attack voice assistants with obfuscated voice commands by leveraging the gap between speech recognition systems and human voice perception. The limitation is that these obfuscated commands are audible and thus conspicuous to device owners. In this poster, we propose a novel mechanism to directly attack the microphone used for sensing voice data with inaudible voice commands. We show that the adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasounds to cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2-3 meters.
KW - Inaudible ultrasound injection
KW - Intermodulation distortion
KW - Microphone
KW - Non-linearity
UR - http://www.scopus.com/inward/record.url?scp=85041431278&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041431278&partnerID=8YFLogxK
U2 - 10.1145/3133956.3138836
DO - 10.1145/3133956.3138836
M3 - Conference contribution
AN - SCOPUS:85041431278
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 2583
EP - 2585
BT - CCS 2017 - Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
Y2 - 30 October 2017 through 3 November 2017
ER -