TY - GEN
T1 - Learning to Infer Semantic Parameters for 3D Shape Editing
AU - Wei, Fangyin
AU - Sizikova, Elena
AU - Sud, Avneesh
AU - Rusinkiewicz, Szymon
AU - Funkhouser, Thomas
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much existing detail as possible. We propose to learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters. The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models, ensuring robustness to shape variability while obviating the need to label realistic exemplars. At test time, edits within the parameter space drive deformations applied to the original shape, which provides semantically meaningful manipulation while preserving detail. This is in contrast to prior methods that either use autoencoders with a limited latent-space dimensionality, failing to preserve arbitrary detail, or drive deformations with purely geometric controls, such as cages, losing the ability to update local part regions. Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work.
AB - Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much existing detail as possible. We propose to learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters. The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models, ensuring robustness to shape variability while obviating the need to label realistic exemplars. At test time, edits within the parameter space drive deformations applied to the original shape, which provides semantically meaningful manipulation while preserving detail. This is in contrast to prior methods that either use autoencoders with a limited latent-space dimensionality, failing to preserve arbitrary detail, or drive deformations with purely geometric controls, such as cages, losing the ability to update local part regions. Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work.
KW - 3D Shape Editing
KW - Deep Learning
UR - http://www.scopus.com/inward/record.url?scp=85101481174&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85101481174&partnerID=8YFLogxK
U2 - 10.1109/3DV50981.2020.00053
DO - 10.1109/3DV50981.2020.00053
M3 - Conference contribution
AN - SCOPUS:85101481174
T3 - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
SP - 434
EP - 442
BT - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on 3D Vision, 3DV 2020
Y2 - 25 November 2020 through 28 November 2020
ER -