TY - JOUR
T1 - SymmetryNet: Learning to predict reflectional and rotational symmetries of 3D shapes from single-view RGB-D images
AU - Shi, Yifei
AU - Huang, Junwen
AU - Zhang, Hongjia
AU - Xu, Xin
AU - Rusinkiewicz, Szymon
AU - Xu, Kai
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/11/26
Y1 - 2020/11/26
N2 - We study the problem of symmetry detection for 3D shapes from single-view RGB-D images, where severely missing data renders geometric detection approaches infeasible. We propose an end-to-end deep neural network that predicts both reflectional and rotational symmetries of 3D objects present in the input RGB-D image. Directly training a deep model for symmetry prediction, however, can quickly run into overfitting. We therefore adopt a multi-task learning approach: aside from symmetry axis prediction, our network is also trained to predict symmetry correspondences. In particular, given the 3D points present in the RGB-D image, our network outputs, for each 3D point, its symmetric counterpart under a specific predicted symmetry. In addition, our network can detect multiple symmetries of different types for a given shape. We also contribute a benchmark for 3D symmetry detection from single-view RGB-D images. Extensive evaluation on the benchmark demonstrates the strong generalization ability of our method, which achieves high accuracy in both symmetry axis prediction and counterpart estimation. In particular, our method robustly handles unseen object instances with large variations in shape and multi-symmetry composition, as well as novel object categories.
KW - counterpart prediction
KW - neural networks
KW - symmetry prediction
UR - http://www.scopus.com/inward/record.url?scp=85097355579&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097355579&partnerID=8YFLogxK
DO - 10.1145/3414685.3417775
M3 - Article
AN - SCOPUS:85097355579
SN - 0730-0301
VL - 39
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 6
M1 - 213
ER -