Quality Assessment of In-the-Wild Videos

August 01, 2019 · Entered Twilight · 🏛 ACM Multimedia

🌅 TWILIGHT: Old Age
Predates the code-sharing era; a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: ACMMM_2019_supplementary_44, CNNfeatures.py, Framework.jpg, License, Readme.md, VSFA.py, _config.yml, data, models, requirements.txt, test.mp4, test_demo.py

Authors: Dingquan Li, Tingting Jiang, Ming Jiang
arXiv ID: 1908.00375
Category: cs.MM (Multimedia)
Cross-listed: cs.CV, eess.IV
Citations: 375
Venue: ACM Multimedia
Repository: https://github.com/lidq92/VSFA ⭐ 215
Last checked: 1 month ago
Abstract
Quality assessment of in-the-wild videos is a challenging problem because of the absence of reference videos and shooting distortions. Knowledge of the human visual system can help establish methods for objective quality assessment of in-the-wild videos. In this work, we show that two eminent effects of the human visual system, namely content-dependency and temporal-memory effects, can be used for this purpose. We propose an objective no-reference video quality assessment method by integrating both effects into a deep neural network. For content-dependency, we extract features from a pre-trained image classification neural network for its inherent content-aware property. For temporal-memory effects, long-term dependencies, especially the temporal hysteresis, are integrated into the network with a gated recurrent unit and a subjectively-inspired temporal pooling layer. To validate the performance of our method, experiments are conducted on three publicly available in-the-wild video quality assessment databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that our proposed method outperforms five state-of-the-art methods by a large margin, specifically, 12.39%, 15.71%, 15.45%, and 18.09% overall performance improvements over the second-best method VBLIINDS, in terms of SROCC, KROCC, PLCC, and RMSE, respectively. Moreover, the ablation study verifies the crucial role of both the content-aware features and the modeling of temporal-memory effects. The PyTorch implementation of our method is released at https://github.com/lidq92/VSFA.
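
The abstract describes a two-stage pipeline: content-aware per-frame features from a pre-trained image classifier, then a GRU plus a subjectively-inspired temporal pooling layer that models temporal hysteresis. Below is a minimal PyTorch sketch of that pipeline, not the released code: the ResNet-50 backbone, the mean+std spatial pooling, and the hyperparameters (`tau`, `beta`, the 128-d reduction, the 32-unit GRU) are illustrative assumptions; consult the linked VSFA repository for the authoritative implementation.

```python
# Minimal sketch of a VSFA-style pipeline (assumptions noted above;
# details may differ from https://github.com/lidq92/VSFA).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


def content_aware_features(frames: torch.Tensor) -> torch.Tensor:
    """Per-frame features from a pre-trained image classifier.

    frames: (T, 3, H, W). Returns (T, 4096): mean + std of the
    last conv feature maps (assumed pooling scheme).
    """
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    extractor = nn.Sequential(*list(backbone.children())[:-2]).eval()
    with torch.no_grad():
        fmap = extractor(frames)                 # (T, 2048, h, w)
    mean = fmap.mean(dim=(2, 3))                 # spatial mean pooling
    std = fmap.std(dim=(2, 3))                   # spatial std pooling
    return torch.cat([mean, std], dim=1)         # (T, 4096)


class TemporalHysteresisPooling(nn.Module):
    """Subjectively-inspired pooling: viewers penalize quality drops
    quickly but forgive improvements slowly (temporal hysteresis)."""

    def __init__(self, tau: int = 12, beta: float = 0.5):
        super().__init__()
        self.tau, self.beta = tau, beta          # illustrative values

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # q: (T,) per-frame quality scores from the regressor
        T = q.shape[0]
        pooled = torch.empty_like(q)
        for t in range(T):
            # "memory" element: worst quality in the recent past
            memory = q[max(0, t - self.tau):t + 1].min()
            # "current" element: softmin-weighted near future, so poor
            # upcoming frames dominate the momentary impression
            future = q[t:min(T, t + self.tau)]
            current = (F.softmin(future, dim=0) * future).sum()
            pooled[t] = self.beta * memory + (1 - self.beta) * current
        return pooled.mean()                     # overall video quality


class VSFAStyleModel(nn.Module):
    """GRU over content-aware features, then hysteresis pooling."""

    def __init__(self, in_dim: int = 4096, hidden: int = 32):
        super().__init__()
        self.fc = nn.Linear(in_dim, 128)         # dimension reduction
        self.gru = nn.GRU(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)         # frame quality score
        self.pool = TemporalHysteresisPooling()

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.fc(feats)).unsqueeze(0)  # (1, T, 128)
        h, _ = self.gru(x)                       # long-term dependencies
        q = self.head(h).squeeze(0).squeeze(-1)  # (T,) frame scores
        return self.pool(q)


if __name__ == "__main__":
    video = torch.rand(16, 3, 224, 224)          # 16 dummy frames
    score = VSFAStyleModel()(content_aware_features(video))
    print(float(score))                          # predicted quality
```

The min-over-past term encodes the hysteresis finding that viewers react immediately to quality drops, while the softmin weighting over the near future keeps a few bad frames from being averaged away, which is the intuition behind the paper's "subjectively-inspired" pooling.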
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Multimedia

R.I.P. 👻 Ghosted

Video Generation From Text

Yitong Li, Martin Renqiang Min, ... (+3 more)

cs.MM 🏛 AAAI 📚 300 cites 8 years ago