Act-Aware Slot-Value Predicting in Multi-Domain Dialogue State Tracking
August 04, 2022 · Entered Twilight · Interspeech
Repo contents: .gitignore, README.md, act_aware_dst, calc_elmo.sh, calc_elmo_embeddings.py, config_nosp_act.jsonnet, config_sp_act.jsonnet, create_data.py, formulate_pred_belief_state.py, multiwoz_format_act.py, ontology, plot_att.ipynb, predict_act.sh, requirements.txt, train_nosp_act.sh, train_sp_act.sh
Authors
Ruolin Su, Ting-Wei Wu, Biing-Hwang Juang
arXiv ID
2208.02462
Category
cs.CL: Computation & Language
Cross-listed
cs.AI
Citations
5
Venue
Interspeech
Repository
https://github.com/youlandasu/ACT-AWARE-DST
⭐ 4
Last Checked
1 month ago
Abstract
As an essential component in task-oriented dialogue systems, dialogue state tracking (DST) aims to track human-machine interactions and generate state representations for managing the dialogue. Representations of dialogue states are dependent on the domain ontology and the user's goals. In several task-oriented dialogues with a limited scope of objectives, dialogue states can be represented as a set of slot-value pairs. As the capabilities of dialogue systems expand to support increasing naturalness in communication, incorporating dialogue act processing into dialogue model design becomes essential. The lack of such consideration limits the scalability of dialogue state tracking models for dialogues having specific objectives and ontology. To address this issue, we formulate and incorporate dialogue acts, and leverage recent advances in machine reading comprehension to predict both categorical and non-categorical types of slots for multi-domain dialogue state tracking. Experimental results show that our models can improve the overall accuracy of dialogue state tracking on the MultiWOZ 2.1 dataset, and demonstrate that incorporating dialogue acts can guide dialogue state design for future task-oriented dialogue systems.
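The abstract's two slot types suggest two kinds of prediction heads: a categorical head that scores candidate values from the ontology, and an MRC-style span head that extracts non-categorical values directly from the dialogue context. The sketch below is a rough illustration only, not the authors' released code; the class name, hidden sizes, and the use of a generic encoder summary vector are assumptions made for the example.

```python
# Hypothetical sketch (not the repository's implementation): shows a
# categorical value-classification head alongside an MRC-style span head,
# as described at a high level in the abstract. All names and dimensions
# here are assumptions for illustration.
import torch
import torch.nn as nn

class ActAwareSlotHeads(nn.Module):
    def __init__(self, hidden_size: int = 256, num_candidate_values: int = 10):
        super().__init__()
        # Scores each ontology value for a categorical slot (e.g. hotel-parking).
        self.categorical_head = nn.Linear(hidden_size, num_candidate_values)
        # Predicts start/end positions of a value span for a non-categorical
        # slot (e.g. restaurant-name), in the style of reading comprehension.
        self.span_head = nn.Linear(hidden_size, 2)

    def forward(self, token_states: torch.Tensor, pooled_state: torch.Tensor):
        # token_states: (batch, seq_len, hidden) encoder outputs over the
        # dialogue history concatenated with an act-augmented slot query.
        # pooled_state: (batch, hidden) summary vector of the same input.
        value_logits = self.categorical_head(pooled_state)        # (batch, num_values)
        start_logits, end_logits = self.span_head(token_states).split(1, dim=-1)
        return value_logits, start_logits.squeeze(-1), end_logits.squeeze(-1)

# Toy usage with random tensors standing in for encoder outputs.
heads = ActAwareSlotHeads()
tokens = torch.randn(1, 32, 256)
pooled = torch.randn(1, 256)
value_logits, start_logits, end_logits = heads(tokens, pooled)
print(value_logits.shape, start_logits.shape, end_logits.shape)
```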
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Computation & Language
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
R.I.P. · Ghosted
Language Models are Few-Shot Learners
R.I.P. · Ghosted
RoBERTa: A Robustly Optimized BERT Pretraining Approach
R.I.P. · Ghosted
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
R.I.P. · Ghosted