🤖 AI Summary
Existing 3D shape descriptors lack chirality awareness and therefore cannot distinguish mirror-symmetric structures (e.g., left and right hands), which hinders fine-grained analysis of point clouds and meshes. To address this, we propose the first unsupervised chirality-disentanglement framework for 3D feature learning, transferring chirality cues from 2D foundation models to 3D geometric representations. Our method explicitly disentangles and injects chirality information via geometric transformation modeling and contrastive learning, requiring no manual annotations. Built on the Diff3F architecture, it aggregates multi-view chiral cues into vertex-level features that discriminate left from right. Extensive evaluation on ShapeNet, PartNet, and other benchmarks demonstrates substantial improvements: +12.7% in left/right separation accuracy, +9.3% in cross-mirror shape-matching precision, and +4.1% in part-segmentation mIoU, effectively remedying the chirality blindness of conventional descriptors.
📝 Abstract
Chirality information (i.e., information that allows distinguishing left from right) is ubiquitous across data modalities in computer vision, including images, videos, point clouds, and meshes. While chirality has been extensively studied in the image domain, its exploration in shape analysis (e.g., on point clouds and meshes) remains underdeveloped. Although many shape vertex descriptors exhibit appealing properties (e.g., robustness to rigid-body transformations), they are often unable to disambiguate left and right symmetric parts. Given the ubiquity of chirality information in shape analysis problems and its absence from current shape descriptors, developing a chirality-aware feature extractor is both necessary and timely. Building on the recent Diff3F framework, we propose an unsupervised chirality feature extraction pipeline that decorates shape vertices with chirality-aware information extracted from 2D foundation models. We evaluate the extracted chirality features through quantitative and qualitative experiments across diverse datasets. Results on downstream tasks, including left-right disentanglement, shape matching, and part segmentation, demonstrate their effectiveness and practical utility. Project page: https://wei-kang-wang.github.io/chirality/
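As a toy illustration of the chirality blindness described above (a minimal sketch, not the paper's pipeline): a pairwise-distance descriptor is invariant under any isometry, including reflections, so it cannot tell a shape from its mirror image, whereas a signed triple product flips sign under reflection. The `dist_hist` and `chirality_sign` helpers below are hypothetical names introduced for this example only.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))                    # a random point cloud
mirrored = pts * np.array([-1.0, 1.0, 1.0])       # its reflection across the yz-plane

def dist_hist(p):
    """Histogram of pairwise distances: invariant to rotations AND reflections."""
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    return np.histogram(d, bins=10, range=(0.0, d.max()))[0]

def chirality_sign(p):
    """Sign of a triple product of centered points: flips under reflection."""
    c = p - p.mean(axis=0)
    return np.sign(np.dot(np.cross(c[0], c[1]), c[2]))

# The distance descriptor cannot separate the shape from its mirror...
assert np.array_equal(dist_hist(pts), dist_hist(mirrored))
# ...while the signed cue distinguishes the two handedness classes.
assert chirality_sign(pts) == -chirality_sign(mirrored)
```

A full pipeline would of course need a per-vertex, deformation-robust version of such a cue; this toy only shows why purely distance-based (isometry-invariant) descriptors are structurally unable to encode handedness.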