Dynamic Multimodal Fusion via Meta-Learning Towards Micro-Video Recommendation

January 13, 2025 · Entered Twilight · 🏛 ACM Trans. Inf. Syst.

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: Fusion_model.py, GCN_model.py, LICENSE, README.md, cpd_layer.py, data_load.py, data_triple.py, dataset_sample, meta_layer.py, model_test.py, model_train.py

Authors: Han Liu, Yinwei Wei, Fan Liu, Wenjie Wang, Liqiang Nie, Tat-Seng Chua
arXiv ID: 2501.07110
Category: cs.CV (Computer Vision)
Cross-listed: cs.IR, cs.MM
Citations: 37
Venue: ACM Trans. Inf. Syst.
Repository: https://github.com/hanliu95/MetaMMF ⭐ 5
Last Checked: 1 month ago
Abstract
Multimodal information (e.g., visual, acoustic, and textual) has been widely used to enhance representation learning for micro-video recommendation. To integrate multimodal information into a joint representation of a micro-video, multimodal fusion plays a vital role in existing micro-video recommendation approaches. However, the static multimodal fusion used in previous studies is insufficient to model the various relationships among the multimodal information of different micro-videos. In this paper, we develop a novel meta-learning-based multimodal fusion framework called Meta Multimodal Fusion (MetaMMF), which dynamically assigns parameters to the multimodal fusion function of each micro-video during its representation learning. Specifically, MetaMMF regards the multimodal fusion of each micro-video as an independent task. Based on the meta information extracted from the multimodal features of the input task, MetaMMF parameterizes a neural network as the item-specific fusion function via a meta learner. We perform extensive experiments on three benchmark datasets, demonstrating significant improvements over several state-of-the-art multimodal recommendation models such as MMGCN, LATTICE, and InvRL. Furthermore, we lighten our model by adopting canonical polyadic decomposition to improve training efficiency, and validate its effectiveness through experimental results. Code is available at https://github.com/hanliu95/MetaMMF.
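As a rough illustration of the dynamic-fusion idea in the abstract, here is a minimal PyTorch sketch, not the authors' implementation (their code lives in Fusion_model.py and meta_layer.py): a hypernetwork-style meta learner maps each micro-video's concatenated visual, acoustic, and textual features to the weights of that item's own fusion layer. The class name `MetaFusion`, the hidden size `d_meta`, and all dimensions are hypothetical.

```python
# Minimal sketch of meta-learned ("dynamic") multimodal fusion, assuming a
# hypernetwork-style meta learner; NOT the authors' implementation.
import torch
import torch.nn as nn


class MetaFusion(nn.Module):
    def __init__(self, d_vis: int, d_aco: int, d_txt: int, d_out: int, d_meta: int = 64):
        super().__init__()
        self.d_in, self.d_out = d_vis + d_aco + d_txt, d_out
        # Meta learner: reads meta information from the concatenated modality
        # features and outputs the parameters (weights + bias) of an
        # item-specific linear fusion function mapping d_in -> d_out.
        self.meta_learner = nn.Sequential(
            nn.Linear(self.d_in, d_meta),
            nn.ReLU(),
            nn.Linear(d_meta, self.d_in * d_out + d_out),
        )

    def forward(self, vis: torch.Tensor, aco: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        x = torch.cat([vis, aco, txt], dim=-1)                  # (B, d_in)
        params = self.meta_learner(x)                           # (B, d_in*d_out + d_out)
        w = params[:, : self.d_in * self.d_out].view(-1, self.d_out, self.d_in)
        b = params[:, self.d_in * self.d_out:]                  # (B, d_out)
        # Each micro-video is fused with its own generated weight matrix.
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + b    # (B, d_out)


# Usage: fuse a batch of per-item modality features into joint item embeddings.
fusion = MetaFusion(d_vis=128, d_aco=128, d_txt=100, d_out=64)
item_emb = fusion(torch.randn(32, 128), torch.randn(32, 128), torch.randn(32, 100))
print(item_emb.shape)  # torch.Size([32, 64])
```

The abstract's canonical polyadic (CP) decomposition variant would, under the same sketch, have the meta learner emit a few small low-rank factors instead of all d_in × d_out entries of `w`, which is presumably what cpd_layer.py in the repo handles; that mapping to the repo files is an assumption.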
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt – Computer Vision