Real-Time Adversarial Attacks

May 31, 2019 · Entered Twilight · 🏛 International Joint Conference on Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, concatenate_fixed_scale.m, conv_labels.txt, differential_evolution.py, generate_expert_demo.py, inject_attack.py, rnn_model.py, speech_model_train, tensorflow-tensorflow-examples.zip, train_attack.py

Authors: Yuan Gong, Boyang Li, Christian Poellabauer, Yiyu Shi
arXiv ID: 1905.13399
Category: cs.CR: Cryptography & Security
Cross-listed: cs.LG, cs.SD, eess.AS
Citations: 58
Venue: International Joint Conference on Artificial Intelligence
Repository: https://github.com/YuanGongND/realtime-adversarial-attack ⭐ 20
Last Checked: 1 month ago
Abstract
In recent years, many efforts have demonstrated that modern machine learning algorithms are vulnerable to adversarial attacks, in which small but carefully crafted perturbations of the input can make them fail. While these attack methods are very effective, they focus only on scenarios where the target model takes static input, i.e., the attacker can observe the entire original sample and then add a perturbation at any point of it. These approaches are not applicable when the target model takes streaming input, i.e., the attacker can observe only past data points and add perturbations to the remaining (unobserved) data points of the input. In this paper, we propose a real-time adversarial attack scheme for machine learning models with streaming inputs.
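The streaming constraint the abstract describes can be sketched in a few lines. This is not the paper's method (which trains a perturbation generator, per the repo's `rnn_model.py` and `differential_evolution.py`); it is a minimal illustration of the causality constraint, with a hypothetical `craft_perturbation` placeholder standing in for the attacker's learned decision rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def craft_perturbation(observed, size, eps=0.01):
    """Placeholder for the attacker's decision rule (hypothetical).
    Here we simply scale random noise by the energy of what has been
    observed so far; a real attack would use a trained model."""
    scale = eps * (np.abs(observed).mean() if observed.size else 1.0)
    return scale * rng.standard_normal(size)

def streaming_attack(chunks, eps=0.01):
    """Perturb a streamed signal under the real-time constraint:
    the perturbation added to chunk t may depend only on chunks
    0..t-1, which are the only data the attacker has observed."""
    attacked = []
    for t, chunk in enumerate(chunks):
        observed = np.concatenate(chunks[:t]) if t > 0 else np.empty(0)
        attacked.append(chunk + craft_perturbation(observed, chunk.shape, eps))
    return attacked

# Five 10 ms frames of audio at 16 kHz (160 samples each).
signal = [rng.standard_normal(160) for _ in range(5)]
adv = streaming_attack(signal)
```

A static attack, by contrast, would compute the perturbation from the full `signal` before modifying any sample; the loop above is what rules that out for streaming models.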
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Cryptography & Security