PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features

Published in TPAMI, 2021

Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a new point-set learning framework, PRIN (Point-wise Rotation Invariant Network), which focuses on rotation-invariant feature extraction in point cloud analysis. We construct spherical signals through Density Aware Adaptive Sampling to handle the distorted point distributions in spherical space. Spherical Voxel Convolution and Point Re-sampling are then proposed to extract rotation-invariant features for each point. In addition, we extend PRIN to a sparse version, SPRIN, which operates directly on sparse point clouds. Both PRIN and SPRIN can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. Results show that, on datasets with randomly rotated point clouds, SPRIN achieves better performance than state-of-the-art methods without any data augmentation. We also provide a thorough theoretical proof and analysis of the point-wise rotation invariance achieved by our methods.
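To make the pipeline concrete, below is a minimal, hypothetical sketch of the first stage described above: binning a point cloud into a spherical voxel signal with a simple density correction. The grid sizes, the `1/sin(beta)` weighting, and all function names are illustrative assumptions for exposition, not the paper's exact Density Aware Adaptive Sampling or Spherical Voxel Convolution.

```python
# Hypothetical sketch (not the paper's implementation): map a point cloud onto
# a spherical voxel grid and counteract the density distortion that uniform
# spherical binning introduces.
import numpy as np

def spherical_voxel_signal(points, n_alpha=64, n_beta=64, n_r=4):
    """Map an (N, 3) point cloud to an (n_alpha, n_beta, n_r) spherical signal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8
    alpha = np.arctan2(y, x) + np.pi          # azimuth in [0, 2*pi]
    beta = np.arccos(np.clip(z / r, -1, 1))   # polar angle in [0, pi]

    # Voxel indices along each spherical axis (clamped to the last bin).
    ia = np.minimum((alpha / (2 * np.pi) * n_alpha).astype(int), n_alpha - 1)
    ib = np.minimum((beta / np.pi * n_beta).astype(int), n_beta - 1)
    ir = np.minimum((r / r.max() * n_r).astype(int), n_r - 1)

    # For a uniformly dense cloud, the point count per uniform beta bin grows
    # like sin(beta), so equatorial cells dominate. Upweighting by 1/sin(beta)
    # is one simple way to balance the accumulated signal (an assumption here).
    weights = 1.0 / (np.sin(beta) + 1e-3)

    signal = np.zeros((n_alpha, n_beta, n_r))
    np.add.at(signal, (ia, ib, ir), weights)
    return signal / (signal.max() + 1e-8)

# Usage: a random cloud. Rotating the cloud shifts where mass lands on the
# grid; a subsequent SO(3) spherical convolution is what makes the resulting
# per-point features rotation invariant in the full method.
cloud = np.random.randn(1024, 3)
sig = spherical_voxel_signal(cloud)
print(sig.shape)  # (64, 64, 4)
```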

Recommended citation: You, Y., Lou, Y., Shi, R., Liu, Q., Tai, Y. W., Ma, L., ... & Lu, C. (2021). PRIN/SPRIN: On Extracting Point-wise Rotation Invariant Features. arXiv preprint arXiv:2102.12093.