Residual MeshNet: Learning to deform meshes for single-view 3D reconstruction
- Publication Type: Conference Proceeding
- Citation: Proceedings - 2018 International Conference on 3D Vision, 3DV 2018, 2018, pp. 719-727
- Issue Date: 2018-10-12
Closed Access
Filename | Description | Size
---|---|---
Residual MeshNet - Learning to deform meshes for single-view 3D reconstruction.pdf | Published version | 1.68 MB
© 2018 IEEE. This work presents a novel architecture of deep neural networks to generate meshes approximating the surface of a 3D object from a single image. Compared to existing learning-based 3D reconstruction models, our architecture is characterized by (1) deep mesh deformation stacks with a residual network design, where a simple mesh is transformed to approximate the target surface and undergoes multiple deformation steps that progressively refine the result and reduce the residuals, and (2) parallel paths per deformation step, which can exponentially enrich the generated meshes using a deeper structure and more model parameters. We also propose a novel regularization scheme that encourages the meshes to be both globally complementary, covering the target surface, and locally consistent with each other. Empirical evaluations on benchmark datasets show the advantage of the proposed architecture over existing methods.
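The residual deformation idea from the abstract can be sketched in a few lines: starting from a simple base mesh, each step predicts a small per-vertex offset that is added to the current vertex positions, so later steps only have to correct the residual error. The sketch below is illustrative only; the function names and the toy offset predictor are assumptions, not the paper's actual network, and the parallel-path branching is omitted for brevity.

```python
import numpy as np

def deform_step(vertices, predict_offsets):
    """One residual deformation step: v <- v + f(v).

    In the paper, f would be a learned network; here it is any callable
    mapping (N, 3) vertex positions to (N, 3) offsets (an assumption).
    """
    return vertices + predict_offsets(vertices)

def residual_mesh_stack(vertices, offset_fns):
    """Apply a stack of deformation steps sequentially, refining the mesh."""
    for f in offset_fns:
        vertices = deform_step(vertices, f)
    return vertices

# Toy example: pull points on a unit sphere toward a smaller target radius.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 3))
base /= np.linalg.norm(base, axis=1, keepdims=True)  # project onto unit sphere

target_radius = 0.5
# A hand-written "predictor": move each vertex halfway toward the target surface.
step = lambda v: 0.5 * (target_radius * v / np.linalg.norm(v, axis=1, keepdims=True) - v)

refined = residual_mesh_stack(base, [step, step, step])
radii = np.linalg.norm(refined, axis=1)
# Each step halves the radial error: 1.0 -> 0.75 -> 0.625 -> 0.5625
```

Because each step only adds a correction, a stack of such steps converges geometrically toward the target surface here, which mirrors the progressive-refinement behavior the abstract describes.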