TY - GEN
T1 - Diffusion-SDF: Conditional Generative Modeling of Signed Distance Functions
T2 - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
AU - Chou, Gene
AU - Bahat, Yuval
AU - Heide, Felix
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Probabilistic diffusion models have achieved state-of-the-art results for image synthesis, inpainting, and text-to-image tasks. However, they are still in the early stages of generating complex 3D shapes. This work proposes Diffusion-SDF, a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds. We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks. Neural SDFs are implicit functions and diffusing them amounts to learning the reversal of their neural network weights, which we solve using a custom modulation module. Extensive experiments show that our method is capable of both realistic unconditional generation and conditional generation from partial inputs. This work expands the domain of diffusion models from learning 2D, explicit representations, to 3D, implicit representations. Code is released at https://github.com/princeton-computational-imaging/Diffusion-SDF.
UR - http://www.scopus.com/inward/record.url?scp=85178968630&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85178968630&partnerID=8YFLogxK
U2 - 10.1109/ICCV51070.2023.00215
DO - 10.1109/ICCV51070.2023.00215
M3 - Conference contribution
AN - SCOPUS:85178968630
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2262
EP - 2272
BT - Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 October 2023 through 6 October 2023
ER -