On the Effectiveness of Low-rank Approximations for Collaborative Filtering compared to Neural Networks

May 30, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .coveragerc, .gitignore, AUTHORS.rst, CHANGELOG.rst, LICENSE.txt, README.md, docs, environment.yaml, experiments, requirements.txt, setup.cfg, setup.py, src, tests

Authors: Marcel Kurovski, Florian Wilhelm
arXiv ID: 1905.12967
Category: cs.IR (Information Retrieval)
Citations: 0
Venue: arXiv.org
Repository: https://github.com/FlorianWilhelm/lrann ⭐ 7
Last Checked: 2 months ago
Abstract
Even in times of deep learning, low-rank approximations by factorizing a matrix into user and item latent factors continue to be a method of choice for collaborative filtering tasks due to their great performance. While deep learning based approaches excel in hybrid recommender tasks where additional features for items, users or even context are available, their flexibility seems to rather impair the performance compared to low-rank approximations for pure collaborative filtering tasks where no additional features are used. Recent works propose hybrid models combining low-rank approximations and traditional deep neural architectures with promising results but fail to explain why neural networks alone are unsuitable for this task. In this work, we revisit the model and intuition behind low-rank approximation to point out its suitability for collaborative filtering tasks. In several experiments we compare the performance and behavior of models based on a deep neural network and low-rank approximation to examine the reasons for the low effectiveness of traditional deep neural networks. We conclude that the universal approximation capabilities of traditional deep neural networks severely impair the determination of suitable latent vectors, leading to a worse performance compared to low-rank approximations.
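The low-rank approach the abstract contrasts with deep networks scores a user–item pair as the dot product of two learned latent vectors. A minimal sketch of that idea, trained with per-observation SGD on a toy rating matrix, is below; this is an illustrative toy under assumed sizes, learning rate, and regularization, not the paper's lrann code:

```python
import numpy as np

# Toy low-rank matrix factorization for collaborative filtering.
# All dimensions and hyperparameters here are illustrative assumptions.
rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 4

# Ground-truth rank-k rating matrix to sample observations from.
true_u = rng.normal(size=(n_users, k))
true_v = rng.normal(size=(n_items, k))
ratings = true_u @ true_v.T

# Observe a random subset of entries (the "known ratings").
mask = rng.random((n_users, n_items)) < 0.7
obs = np.argwhere(mask)

# Latent factors to learn, initialized small.
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

lr, reg = 0.02, 1e-4
for epoch in range(300):
    rng.shuffle(obs)  # visit observed entries in random order
    for u, i in obs:
        err = ratings[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

# Predicted matrix and error on the held-out (unobserved) entries.
pred = U @ V.T
rmse = np.sqrt(np.mean((pred[~mask] - ratings[~mask]) ** 2))
```

The key property the paper leans on is that the prediction is *forced* to be a bilinear function of the latent vectors, whereas a deep network interposed between the embeddings and the score can approximate far more functions and, per the authors' conclusion, thereby impairs the determination of good latent vectors.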

📜 Similar Papers

In the same crypt: Information Retrieval