Evaluation and Improvement of Chatbot Text Classification Data Quality Using Plausible Negative Examples

June 05, 2019 · Entered Twilight · 🏛 Proceedings of the First Workshop on NLP for Conversational AI

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (≥5-year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, nex_cv

Authors: Kit Kuksenok, Andriy Martyniv
arXiv ID: 1906.01910
Category: cs.IR (Information Retrieval)
Cross-listed: cs.CL, cs.LG
Citations: 5
Venue: Proceedings of the First Workshop on NLP for Conversational AI
Repository: https://github.com/jobpal/nex-cv ⭐ 2
Last checked: 1 month ago
Abstract
We describe and validate a metric for estimating multi-class classifier performance based on cross-validation and adapted for improvement of small, unbalanced natural-language datasets used in chatbot design. Our experiences draw upon building recruitment chatbots that mediate communication between job-seekers and recruiters by exposing the ML/NLP dataset to the recruiting team. Evaluation approaches must be understandable to various stakeholders, and useful for improving chatbot performance. The metric, nex-cv, uses negative examples in the evaluation of text classification, and fulfils three requirements. First, it is actionable: it can be used by non-developer staff. Second, it is not overly optimistic compared to human ratings, making it a fast method for comparing classifiers. Third, it allows model-agnostic comparison, making it useful for comparing systems despite implementation differences. We validate the metric based on seven recruitment-domain datasets in English and German over the course of one year.
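The abstract's core idea (cross-validation in which held-out categories serve as plausible negative examples, so a classifier is rewarded for abstaining on out-of-scope inputs) can be illustrated with a minimal sketch. This is NOT the authors' nex-cv implementation (see the linked repository for that); the function name, the toy bag-of-words classifier, and the confidence threshold are all hypothetical choices made for illustration.

```python
import random
from collections import Counter, defaultdict

def nex_cv_sketch(data, folds=5, neg_frac=0.2, threshold=0.1, seed=0):
    """Illustrative sketch of cross-validation with plausible negatives.

    In each fold, a random subset of labels is held out as "negatives":
    the classifier never trains on them, and a prediction on one of them
    counts as correct only if confidence stays below the threshold.
    """
    rng = random.Random(seed)
    labels = sorted({y for _, y in data})
    scores = []
    for _ in range(folds):
        k = max(1, int(len(labels) * neg_frac))
        neg_labels = set(rng.sample(labels, k))
        pos = [(x, y) for x, y in data if y not in neg_labels]
        neg = [(x, y) for x, y in data if y in neg_labels]
        rng.shuffle(pos)
        cut = max(1, len(pos) // folds)
        test_pos, train = pos[:cut], pos[cut:]

        # Toy bag-of-words centroid "classifier", for illustration only.
        vocab = defaultdict(Counter)
        for text, label in train:
            vocab[label].update(text.lower().split())

        def predict(text):
            words = set(text.lower().split())
            best, conf = None, 0.0
            for label, counts in vocab.items():
                total = sum(counts.values())
                score = sum(counts[w] for w in words) / total if total else 0.0
                if score > conf:
                    best, conf = label, score
            return best, conf

        correct = 0
        for text, label in test_pos:
            pred, conf = predict(text)
            correct += int(pred == label and conf >= threshold)
        for text, _ in neg:
            _, conf = predict(text)
            correct += int(conf < threshold)  # abstaining on negatives is correct
        scores.append(correct / (len(test_pos) + len(neg)))
    return sum(scores) / folds

# Toy recruitment-style intents (hypothetical data for illustration).
data = [
    ("how do i apply for the job", "apply"),
    ("where can i submit my application", "apply"),
    ("what salary do you offer", "salary"),
    ("how much does the position pay", "salary"),
    ("what are the working hours", "hours"),
    ("when does the shift start", "hours"),
]
print(nex_cv_sketch(data, folds=3, neg_frac=0.34))
```

Because negatives are scored on abstention rather than on the label the classifier was never shown, the score cannot be inflated by confident-but-wrong predictions on out-of-scope inputs, which matches the abstract's "not overly optimistic" requirement.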
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Information Retrieval