Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

June 30, 2022 · Declared Dead · 🏛 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval

⚰️ CAUSE OF DEATH: The Empty Tomb
GitHub repo is empty
Authors: An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley
arXiv ID: 2207.00422
Category: cs.IR (Information Retrieval)
Cross-listed: cs.AI, cs.CV
Citations: 58
Venue: Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Repository: https://github.com/zzxslp/Gest (⭐ 11)
Last Checked: 1 month ago
Abstract
Existing explanation models generate only text for recommendations and still struggle to produce diverse content. In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations. Specifically, we first select a personalized image set that is the most relevant to a user's interest in a recommended item. Then, natural language explanations are generated accordingly given our selected images. For this new task, we collect a large-scale dataset from Google Local (i.e., Google Maps) and construct a high-quality subset for generating multi-modal explanations. We propose a personalized multi-modal framework which can generate diverse and visually-aligned explanations via contrastive learning. Experiments show that our framework benefits from different modalities as inputs, and is able to produce more diverse and expressive explanations compared to previous methods on a variety of evaluation metrics.
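Since the repository is empty, nothing below comes from the authors' code. The abstract describes a two-stage pipeline, and the first stage (picking the item images most relevant to a user's interests) can be sketched under one plausible assumption: user history photos and candidate item photos live in a shared embedding space (e.g., CLIP-style features). All names and shapes here, including select_personalized_images, are hypothetical.

```python
# Hypothetical sketch of the "personalized image set" selection stage.
# Assumption: images are already embedded in a shared space; we score each
# candidate item image by its best cosine similarity against the user's
# historical photos and keep the top-k.

import numpy as np

def select_personalized_images(
    user_image_embs: np.ndarray,   # (n_user, d) embeddings of the user's past photos
    item_image_embs: np.ndarray,   # (n_item, d) embeddings of the candidate item's photos
    k: int = 3,
) -> np.ndarray:
    """Return indices of the k item images most relevant to this user."""
    # L2-normalize so dot products become cosine similarities.
    u = user_image_embs / np.linalg.norm(user_image_embs, axis=1, keepdims=True)
    v = item_image_embs / np.linalg.norm(item_image_embs, axis=1, keepdims=True)
    sim = v @ u.T                  # (n_item, n_user) pairwise similarities
    scores = sim.max(axis=1)       # best match against the user's history
    return np.argsort(scores)[::-1][:k]

# Toy usage with random vectors standing in for real image features.
rng = np.random.default_rng(0)
print(select_personalized_images(rng.normal(size=(5, 512)),
                                 rng.normal(size=(20, 512)), k=3))
```

Scoring by the max over the user's history favors item images that strongly match at least one past photo; a mean would instead favor broadly typical images. The abstract also credits contrastive learning for the diversity and visual alignment of the generated text. Without the code, an InfoNCE-style loss over (image, explanation) pairs is one plausible reading of that objective, not necessarily the authors' exact formulation:

```python
# Generic symmetric InfoNCE loss between image embeddings and explanation-text
# embeddings; matched pairs sit on the diagonal of the similarity matrix.

import torch
import torch.nn.functional as F

def info_nce(img: torch.Tensor, txt: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = img @ txt.t() / tau             # (B, B) similarity matrix
    targets = torch.arange(img.size(0))      # index of each row's positive pair
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

print(info_nce(torch.randn(8, 256), torch.randn(8, 256)))
```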

📜 Similar Papers

In the same crypt — Information Retrieval

Died the same way — ⚰️ The Empty Tomb