Second-Order NLP Adversarial Examples

October 05, 2020 · Entered Twilight · 🏛 BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, figs, s3_constraint_robustness_curve_demo.ipynb, s5_attack_mr.ipynb, s5_attack_snli.ipynb, s5_attack_sst2.ipynb, s99_app_s2_constraints_paws.ipynb, s99_app_s2_constraints_qqp.ipynb, s99_app_s2_testing_constraints_adversarial_snli.ipynb

Authors: John X. Morris
arXiv ID: 2010.01770
Category: cs.CL (Computation & Language)
Citations: 0
Venue: BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Repository: https://github.com/jxmorris12/second-order-adversarial-examples ⭐ 5
Last Checked: 2 months ago
Abstract
Adversarial example generation methods in NLP rely on models such as language models or sentence encoders to determine whether potential adversarial examples are valid. In these methods, a valid adversarial example fools the model being attacked and is determined to be semantically or syntactically valid by a second model. Research to date has counted all such examples as errors by the attacked model. We contend that these adversarial examples may not be flaws in the attacked model, but flaws in the model that determines validity. We term such invalid inputs second-order adversarial examples. We propose the constraint robustness curve, and an associated metric, ACCS, as tools for evaluating the robustness of a constraint to second-order adversarial examples. To generate this curve, we design an adversarial attack to run directly on the semantic similarity models. We test two constraints, the Universal Sentence Encoder (USE) and BERTScore. Our findings indicate that such second-order examples exist, but are typically less common than first-order adversarial examples in state-of-the-art models. They also indicate that USE is effective as a constraint on NLP adversarial examples, while BERTScore is nearly ineffectual. Code for running the experiments in this paper is available at https://github.com/jxmorris12/second-order-adversarial-examples.
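The core mechanism the abstract describes, a second model acting as a validity constraint that filters candidate adversarial examples by semantic similarity, can be sketched as follows. This is a minimal illustration and not the paper's implementation: `embed` here is a stand-in bag-of-words encoder used in place of USE or BERTScore, and the 0.7 threshold is an arbitrary placeholder.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in sentence encoder: a bag-of-words count vector.
    A real constraint would use USE or BERTScore embeddings instead."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def passes_constraint(original, perturbed, threshold=0.7):
    """A candidate adversarial example is counted as valid only if the
    similarity model scores it above the threshold. A second-order
    adversarial example is an invalid input that fools this check."""
    return cosine_similarity(embed(original), embed(perturbed)) >= threshold

# A word-swap perturbation that stays close to the original passes;
# an unrelated sentence is rejected by the constraint.
print(passes_constraint("the movie was great", "the movie was excellent"))
print(passes_constraint("the movie was great", "tax forms are due in april"))
```

The paper's attack then targets the similarity model itself, searching for perturbations that this kind of check wrongly accepts; sweeping the threshold produces the constraint robustness curve.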
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt — Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL 🏛 NeurIPS 📚 166.0K cites 8 years ago