ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Image

October 27, 2023 · Entered Twilight · 🏛 Computer Vision and Pattern Recognition

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, .gitmodules, LICENSE, README.md, configs, docs, launch.py, launch_eval_dtu.sh, launch_eval_mipnerf360.sh, launch_inference.sh, motorcycle.png, requirements-zeronvs.txt, resources.py, threestudio, zeronvs_config.yaml, zeronvs_diffusion, zeronvs_teaser.png

Authors: Kyle Sargent, Zizhang Li, Tanmay Shah, Charles Herrmann, Hong-Xing Yu, Yunzhi Zhang, Eric Ryan Chan, Dmitry Lagun, Li Fei-Fei, Deqing Sun, Jiajun Wu
arXiv ID: 2310.17994
Category: cs.CV (Computer Vision)
Cross-listed: cs.GR
Citations: 87
Venue: Computer Vision and Pattern Recognition
Repository: https://github.com/kylesargent/zeronvs (⭐ 528)
Last checked: 9 days ago
Abstract
We introduce a 3D-aware diffusion model, ZeroNVS, for single-image novel view synthesis for in-the-wild scenes. While existing methods are designed for single objects with masked backgrounds, we propose new techniques to address challenges introduced by in-the-wild multi-object scenes with complex backgrounds. Specifically, we train a generative prior on a mixture of data sources that capture object-centric, indoor, and outdoor scenes. To address issues from data mixture such as depth-scale ambiguity, we propose a novel camera conditioning parameterization and normalization scheme. Further, we observe that Score Distillation Sampling (SDS) tends to truncate the distribution of complex backgrounds during distillation of 360-degree scenes, and propose "SDS anchoring" to improve the diversity of synthesized novel views. Our model sets a new state-of-the-art result in LPIPS on the DTU dataset in the zero-shot setting, even outperforming methods specifically trained on DTU. We further adapt the challenging Mip-NeRF 360 dataset as a new benchmark for single-image novel view synthesis, and demonstrate strong performance in this setting. Our code and data are at http://kylesargent.github.io/zeronvs/
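For readers skimming the abstract: the "camera conditioning parameterization and normalization scheme" addresses the fact that mixed data sources (object-centric, indoor, outdoor) come at incompatible metric scales, so raw camera translations are ambiguous conditioning signals. Below is a minimal sketch of one way such scale-normalized relative-pose conditioning could look. The pose convention, function name, and the idea of dividing translation by an estimated scene scale are our assumptions for illustration only; the authors' actual scheme lives in the repository linked above.

```python
# A minimal sketch of scale-normalized camera conditioning (hypothetical,
# not the authors' released implementation). Assumes 4x4 camera-to-world
# pose matrices and a per-scene `scene_scale` estimated elsewhere
# (e.g., from sparse depth).
import numpy as np

def normalized_relative_pose(pose_cond: np.ndarray,
                             pose_target: np.ndarray,
                             scene_scale: float) -> np.ndarray:
    """Express the target camera relative to the conditioning camera,
    with translation divided by the estimated scene scale so that scenes
    captured at different metric scales yield comparable conditioning."""
    # Relative transform: conditioning frame -> target frame.
    relative = np.linalg.inv(pose_cond) @ pose_target
    # Normalize the translation component to resolve depth-scale ambiguity.
    relative[:3, 3] /= scene_scale
    return relative
```

The point of normalizing before conditioning the diffusion model is that "move 1 unit right" then means roughly the same viewpoint change whether the source scene was a tabletop object or an outdoor capture.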
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision