Deep Fully-Connected Networks for Video Compressive Sensing

March 16, 2016 · Entered Twilight · 🏛 Digit. Signal Process.

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, datasets, download, imgs, layers, models, test.py, train.py, utils

Authors: Michael Iliadis, Leonidas Spinoulas, Aggelos K. Katsaggelos
arXiv ID: 1603.04930
Category: cs.CV: Computer Vision
Cross-listed: cs.LG, cs.MM
Citations: 199
Venue: Digit. Signal Process.
Repository: https://github.com/miliadis/DeepVideoCS
Stars: ⭐ 80
Last Checked: 1 month ago
Abstract
In this work we present a deep learning framework for video compressive sensing. The proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches. Our investigation starts by learning a linear mapping between video sequences and corresponding measured frames, which turns out to provide promising results. We then extend the linear formulation to deep fully-connected networks and explore the performance gains of deeper architectures. Our analysis is always driven by the applicability of the proposed framework to existing compressive video architectures. Extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively. Finally, our analysis offers insights into how dataset size and number of layers affect reconstruction performance, while raising a few points for future investigation. Code is available on GitHub: https://github.com/miliadis/DeepVideoCS
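The abstract's starting point, learning a linear mapping from coded measurement frames back to video blocks, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the dimensions, the random binary temporal mask, and the synthetic Gaussian training patches are all assumptions made for a self-contained toy example, and a ridge-regression solve stands in for the paper's training procedure. Real video patches carry temporal structure that the learned mapping (and its deep fully-connected extension) would exploit; random data only exercises the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical): a video block of t frames of p-by-p patches is
# compressed into a single p-by-p measurement frame.
t, p = 4, 8
n = t * p * p            # video-patch dimension
m = p * p                # measurement dimension (one coded frame per block)

# Sensing operator: each measurement pixel sums a random binary temporal code.
phi = rng.integers(0, 2, size=(m, n)).astype(float)

# Synthetic training patches and their measurements y = phi @ x.
x_train = rng.standard_normal((n, 5000))
y_train = phi @ x_train

# Learn a linear decoder W by regularized least squares:
#   W = X Y^T (Y Y^T + lam * I)^{-1}
lam = 1e-3
w = x_train @ y_train.T @ np.linalg.inv(y_train @ y_train.T + lam * np.eye(m))

# Reconstruct held-out patches from their measurements alone.
x_test = rng.standard_normal((n, 100))
x_hat = w @ (phi @ x_test)
err = np.linalg.norm(x_hat - x_test) / np.linalg.norm(x_test)
```

Replacing the single matrix `w` with a stack of fully-connected layers and a nonlinearity, trained by backpropagation on the same (measurement, patch) pairs, gives the deep extension the abstract describes.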
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Computer Vision