TestAug: A Framework for Augmenting Capability-based NLP Tests

October 14, 2022 · Entered Twilight · 🐛 International Conference on Computational Linguistics

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitignore, README.md, classifier, dataset, pipeline, reproduce.sh, setting, setup.py, testaug, utils

Authors: Guanqun Yang, Mirazul Haque, Qiaochu Song, Wei Yang, Xueqing Liu
arXiv ID: 2210.08097
Category: cs.SE (Software Engineering)
Cross-listed: cs.AI, cs.CL, cs.LG
Citations: 0
Venue: International Conference on Computational Linguistics
Repository: https://github.com/guanqun-yang/testaug ⭐ 5
Last Checked: 1 month ago
Abstract
The recently proposed capability-based NLP testing allows model developers to test the functional capabilities of NLP models, revealing functional failures that cannot be detected by the traditional held-out mechanism. However, existing work on capability-based testing requires extensive manual effort and domain expertise to create the test cases. In this paper, we investigate a low-cost approach to test case generation by leveraging the GPT-3 engine. We further propose to use a classifier to remove the invalid outputs from GPT-3 and to expand the valid outputs into templates that generate more test cases. Our experiments show that TestAug has three advantages over the existing work on behavioral testing: (1) TestAug finds more bugs than existing work; (2) the test cases in TestAug are more diverse; and (3) TestAug largely saves the manual effort of creating test suites. The code and data for TestAug can be found at our project website (https://guanqun-yang.github.io/testaug/) and GitHub (https://github.com/guanqun-yang/testaug).
