Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering

November 07, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, NOTICE, README.md, THIRD-PARTY, calc_elmo.sh, calc_elmo_embeddings.py, config_nosp.jsonnet, config_sp.jsonnet, dstqa, formulate_pred_belief_state.py, multiwoz_2.1_format.py, multiwoz_format.py, ontology, predict.sh, requirements.txt, train_nosp.sh, train_sp.sh

Authors: Li Zhou, Kevin Small
arXiv ID: 1911.06192
Category: cs.CL: Computation & Language
Cross-listed: cs.AI, cs.LG, stat.ML
Citations: 90
Venue: arXiv.org
Repository: https://github.com/alexa/dstqa ⭐ 33
Last Checked: 1 month ago
Abstract
Multi-domain dialogue state tracking (DST) is a critical component for conversational AI systems. The domain ontology (i.e., specification of domains, slots, and values) of a conversational AI system is generally incomplete, making the capability for DST models to generalize to new slots, values, and domains during inference imperative. In this paper, we propose to model multi-domain DST as a question answering problem, referred to as Dialogue State Tracking via Question Answering (DSTQA). Within DSTQA, each turn generates a question asking for the value of a (domain, slot) pair, thus making it naturally extensible to unseen domains, slots, and values. Additionally, we use a dynamically-evolving knowledge graph to explicitly learn relationships between (domain, slot) pairs. Our model has a 5.80% and 12.21% relative improvement over the current state-of-the-art model on MultiWOZ 2.0 and MultiWOZ 2.1 datasets, respectively. Additionally, our model consistently outperforms the state-of-the-art model in domain adaptation settings. (Code is released at https://github.com/alexa/dstqa )
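The core idea in the abstract can be sketched in a few lines: at each turn, pose one question per (domain, slot) pair from the ontology and read the answers off into the belief state. The ontology, function names, and question template below are hypothetical illustrations, not the authors' implementation (which lives in the repo's `dstqa` module).

```python
# Illustrative sketch of DST-as-QA: ask one question per (domain, slot) pair.
# The ontology, template, and function names here are hypothetical, not the
# DSTQA codebase's actual API.

ONTOLOGY = {
    "restaurant": ["food", "area", "price range"],
    "hotel": ["area", "stars"],
}

def formulate_questions(ontology):
    """Generate one natural-language question per (domain, slot) pair."""
    questions = []
    for domain, slots in ontology.items():
        for slot in slots:
            question = f"What is the {slot} of the {domain} the user wants?"
            questions.append(((domain, slot), question))
    return questions

def track_state(turn_answers, ontology):
    """Fill the belief state from per-question answers; 'none' = unanswered."""
    state = {}
    for (domain, slot), _question in formulate_questions(ontology):
        value = turn_answers.get((domain, slot), "none")
        if value != "none":
            state[(domain, slot)] = value
    return state
```

Because questions are generated from the ontology rather than hard-coded per slot, adding an unseen domain or slot only extends the ontology dictionary, which is what makes the formulation "naturally extensible" as the abstract claims.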

📜 Similar Papers

In the same crypt — Computation & Language

🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago