MeshMVS: Multi-View Stereo Guided Mesh Reconstruction

October 17, 2020 · Entered Twilight · 🏛 International Conference on 3D Vision

🌅 TWILIGHT: Old Age
Predates the code-sharing era; a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .flake8, .gitignore, .gitmodules, CODE_OF_CONDUCT.md, CONTRIBUTING.md, INSTRUCTIONS_PIX3D.md, INSTRUCTIONS_SHAPENET.md, LICENSE, README.md, configs, datasets, demo, external, infra, meshrcnn, requirements.txt, setup.cfg, setup.py, shapenet, tools

Authors: Rakesh Shrestha, Zhiwen Fan, Qingkun Su, Zuozhuo Dai, Siyu Zhu, Ping Tan
arXiv ID: 2010.08682
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, eess.IV
Citations: 11
Venue: International Conference on 3D Vision
Repository: https://github.com/rakeshshrestha31/meshmvs.git
⭐ Stars: 9
Last Checked: 29 days ago
Abstract
Deep learning based 3D shape generation methods generally utilize latent features extracted from color images to encode the semantics of objects and guide the shape generation process. These color image semantics only implicitly encode 3D information, potentially limiting the accuracy of the generated shapes. In this paper, we propose a multi-view mesh generation method which incorporates geometry information explicitly by using the features from intermediate depth representations of multi-view stereo and regularizing the 3D shapes against these depth images. First, our system predicts a coarse 3D volume from the color images by probabilistically merging voxel occupancy grids from the predictions of individual views. Then the depth images from multi-view stereo, along with the rendered depth images of the coarse shape, are used as a contrastive input whose features guide the refinement of the coarse shape through a series of graph convolution networks. Notably, we outperform state-of-the-art multi-view shape generation methods, with a 34% decrease in Chamfer distance to ground truth and a 14% increase in F1-score on the ShapeNet dataset. Our source code is available at https://git.io/Jmalg
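The abstract's first step, probabilistically merging per-view voxel occupancy grids into a coarse volume, can be sketched in a few lines. The sketch below is an illustration only, not the paper's actual implementation: it assumes each view produces an independent occupancy probability per voxel and fuses them by averaging log-odds, one common way to combine independent estimates; the function name, shapes, and threshold are our own assumptions.

```python
import numpy as np

def merge_occupancy_grids(per_view_probs, threshold=0.5):
    """Probabilistically fuse per-view voxel occupancy grids.

    Illustrative sketch (not the paper's code): assumes
    per_view_probs has shape (V, D, H, W) with per-view occupancy
    probabilities in [0, 1], and fuses views by averaging log-odds.
    """
    eps = 1e-6
    p = np.clip(per_view_probs, eps, 1.0 - eps)
    logits = np.log(p) - np.log(1.0 - p)   # per-view log-odds
    fused_logits = logits.mean(axis=0)     # pool evidence across views
    fused_probs = 1.0 / (1.0 + np.exp(-fused_logits))
    return fused_probs, fused_probs > threshold  # soft and hard grids
```

Voxels that most views agree are occupied end up with high fused probability, while disagreements are softened rather than resolved by a hard vote.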
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision