Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples
September 09, 2020 · Entered Twilight · BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, README.md, autoevaluation.ipynb, figures.ipynb, figures, grid_run.py, recipes, results, run_experiment.py
Authors
Jin Yong Yoo, John X. Morris, Eli Lifland, Yanjun Qi
arXiv ID
2009.06368
Category
cs.CL: Computation & Language
Cross-listed
cs.AI, cs.CR, cs.LG
Citations
55
Venue
BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Repository
https://github.com/QData/TextAttack-Search-Benchmark
⭐ 26
Last Checked
1 month ago
Abstract
We study the behavior of several black-box search algorithms used for generating adversarial examples for natural language processing (NLP) tasks. We perform a fine-grained analysis of three elements relevant to search: search algorithm, search space, and search budget. When new search algorithms are proposed in past work, the attack search space is often modified alongside the search algorithm. Without ablation studies benchmarking the search algorithm change with the search space held constant, one cannot tell if an increase in attack success rate is a result of an improved search algorithm or a less restrictive search space. Additionally, many previous studies fail to properly consider the search algorithms' run-time cost, which is essential for downstream tasks like adversarial training. Our experiments provide a reproducible benchmark of search algorithms across a variety of search spaces and query budgets to guide future research in adversarial NLP. Based on our experiments, we recommend greedy attacks with word importance ranking when under a time constraint or attacking long inputs, and either beam search or particle swarm optimization otherwise. Code is shared at https://github.com/QData/TextAttack-Search-Benchmark
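The benchmark builds on the TextAttack framework. The sketch below illustrates the experimental design the abstract describes (hold the search space and query budget fixed, swap only the search algorithm) using TextAttack's public API; the model checkpoint, query budget, and candidate count here are illustrative assumptions, not values taken from the paper.

```python
# A minimal, hypothetical sketch of the benchmark's controlled comparison,
# built with the TextAttack framework. The checkpoint, budget, and candidate
# count below are illustrative assumptions, not values from the paper.
import transformers

from textattack import Attack, AttackArgs, Attacker
from textattack.constraints.pre_transformation import (
    RepeatModification,
    StopwordModification,
)
from textattack.datasets import HuggingFaceDataset
from textattack.goal_functions import UntargetedClassification
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.search_methods import BeamSearch, GreedyWordSwapWIR
from textattack.transformations import WordSwapEmbedding

# Victim model: a sentiment classifier (illustrative checkpoint).
name = "textattack/bert-base-uncased-SST-2"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Search space held constant: synonym swaps from counter-fitted embeddings,
# plus the usual pre-transformation constraints.
transformation = WordSwapEmbedding(max_candidates=50)
constraints = [RepeatModification(), StopwordModification()]

# Query budget held constant, enforced by the goal function.
goal_function = UntargetedClassification(wrapper, query_budget=1000)

dataset = HuggingFaceDataset("glue", "sst2", split="validation")

# Only the search algorithm varies between runs.
for search_method in (GreedyWordSwapWIR(wir_method="delete"),
                      BeamSearch(beam_width=4)):
    attack = Attack(goal_function, constraints, transformation, search_method)
    Attacker(attack, dataset, AttackArgs(num_examples=10)).attack_dataset()
```

Greedy search with word importance ranking spends roughly one query per word up front to rank words, then edits greedily, which is why the paper recommends it for long inputs or tight budgets; beam search and particle swarm optimization instead spend the budget exploring more of the search space.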
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding · R.I.P. · 👻 Ghosted
Language Models are Few-Shot Learners · R.I.P. · 👻 Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach · R.I.P. · 👻 Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension · R.I.P. · 👻 Ghosted