Learning more expressive joint distributions in multimodal variational methods

September 08, 2020 · Declared Dead · 🏛 International Conference on Machine Learning, Optimization, and Data Science

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Sasho Nedelkoski, Mihail Bogojeski, Odej Kao
arXiv ID: 2009.03651
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CV, stat.ML
Citations: 1
Venue: International Conference on Machine Learning, Optimization, and Data Science
Repository: https://github.com/SashoNedelkoski/BPFDMVM
Last checked: 1 month ago
Abstract
Data are often composed of multiple modalities that jointly describe the observed phenomena. Modeling the joint distribution of multimodal data requires greater expressive power to capture high-level concepts and provide better data representations. However, multimodal generative models based on variational inference are limited by the lack of flexibility of the approximate posterior, which is obtained by searching within a known parametric family of distributions. We introduce a method that improves the representational capacity of multimodal variational methods using normalizing flows. It approximates the joint posterior with a simple parametric distribution and subsequently transforms it into a more complex one. Through several experiments, we demonstrate that the model improves on state-of-the-art multimodal methods based on variational inference on various computer vision tasks such as colorization, edge and mask detection, and weakly supervised learning. We also show that learning more powerful approximate joint distributions improves the quality of the generated samples. The code of our model is publicly available at https://github.com/SashoNedelkoski/BPFDMVM.
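The core idea in the abstract, transforming a simple parametric posterior into a more expressive one with normalizing flows, can be illustrated with a single planar-flow step. This is a minimal NumPy sketch of the general technique, not the paper's implementation; the parameter values `u`, `w`, `b` are arbitrary placeholders chosen for the example.

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar-flow step f(z) = z + u * tanh(w.z + b).

    Returns the transformed samples and log|det Jacobian|, which is
    needed to track the density of the transformed distribution.
    """
    lin = z @ w + b                    # (batch,) pre-activation
    h = np.tanh(lin)
    z_new = z + np.outer(h, u)         # (batch, dim) transformed samples
    psi = np.outer(1.0 - h**2, w)      # derivative term psi(z)
    log_det = np.log(np.abs(1.0 + psi @ u))
    return z_new, log_det

rng = np.random.default_rng(0)
dim = 2
z0 = rng.standard_normal((512, dim))   # samples from a simple base posterior N(0, I)

# Arbitrary flow parameters; invertibility requires w.u >= -1 (here w.u = 1.5).
u = np.array([1.0, 0.5])
w = np.array([2.0, -1.0])
b = 0.0

z1, log_det = planar_flow(z0, u, w, b)
# Density correction: log q1(z1) = log q0(z0) - log|det J|
print(z1.shape, log_det.shape)
```

In the paper's setting such flow steps would be stacked and applied to the joint multimodal posterior, with the accumulated log-determinants entering the variational objective.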
