Interpretable Knowledge Tracing via Response Influence-based Counterfactual Reasoning

December 01, 2023 · Entered Twilight · 🏛 IEEE International Conference on Data Engineering

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: BaseModel.py, README.md, Runner.py, Training.py, data, metric.py, models, utils.py

Authors: Jiajun Cui, Minghe Yu, Bo Jiang, Aimin Zhou, Jianyong Wang, Wei Zhang
arXiv ID: 2312.10045
Category: cs.CY: Computers & Society
Cross-listed: cs.AI, cs.LG
Citations: 9
Venue: IEEE International Conference on Data Engineering
Repository: https://github.com/JJCui96/RCKT ⭐ 5
Last Checked: 1 month ago
Abstract
Knowledge tracing (KT) plays a crucial role in computer-aided education and intelligent tutoring systems, aiming to assess students' knowledge proficiency by predicting their future performance on new questions based on their past response records. While existing deep learning knowledge tracing (DLKT) methods have significantly improved prediction accuracy and achieved state-of-the-art results, they often suffer from a lack of interpretability. To address this limitation, current approaches have explored incorporating psychological influences to achieve more explainable predictions, but they tend to overlook the potential influences of historical responses. In fact, understanding how models make predictions based on response influences can enhance the transparency and trustworthiness of the knowledge tracing process, presenting an opportunity for a new paradigm of interpretable KT. However, measuring unobservable response influences is challenging. In this paper, we resort to counterfactual reasoning that intervenes in each response to answer "what if a student had answered a question incorrectly that he/she actually answered correctly, and vice versa". Based on this, we propose RCKT, a novel response influence-based counterfactual knowledge tracing framework. RCKT generates response influences by comparing prediction outcomes from factual sequences and constructed counterfactual sequences after interventions. Additionally, we introduce maximization and inference techniques to leverage accumulated influences from different past responses, further improving the model's performance and credibility. Extensive experimental results demonstrate that our RCKT method outperforms six state-of-the-art knowledge tracing baselines on four datasets, and provides credible interpretations of response influences.
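The counterfactual-intervention idea in the abstract can be sketched in a few lines: flip one past response at a time and measure how the prediction for the next question shifts. This is a minimal illustrative sketch, not the RCKT implementation from the repository; the predictor below is a toy stand-in (a Laplace-smoothed mean of past correctness), and names like `predict_correct` and `response_influences` are hypothetical.

```python
def predict_correct(responses):
    """Toy KT predictor: probability the next answer is correct,
    estimated as a Laplace-smoothed mean of past correctness.
    (A stand-in for a learned DLKT model, for illustration only.)"""
    return (sum(responses) + 1) / (len(responses) + 2)

def response_influences(responses):
    """For each past response, intervene by flipping it
    (correct <-> incorrect), rebuild the counterfactual sequence,
    and report how much the factual prediction changes."""
    factual = predict_correct(responses)
    influences = []
    for i in range(len(responses)):
        counterfactual_seq = list(responses)
        counterfactual_seq[i] = 1 - counterfactual_seq[i]  # the intervention
        counterfactual = predict_correct(counterfactual_seq)
        # Positive influence: this response pushed the prediction upward.
        influences.append(factual - counterfactual)
    return influences

history = [1, 1, 0, 1]  # 1 = correct, 0 = incorrect
print(response_influences(history))
```

With this toy predictor, each correct response in `history` has a positive influence and the incorrect one a negative influence, matching the intuition that the sign of a response's influence explains its contribution to the prediction.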
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Computers & Society

R.I.P. 👻 Ghosted

Green AI

Roy Schwartz, Jesse Dodge, ... (+2 more)

cs.CY ๐Ÿ› arXiv ๐Ÿ“š 1.5K cites 6 years ago