🤖 AI Summary
Current vision-language models (VLMs) exhibit limited performance on cross-view point-level correspondence (CVPC), struggling to precisely localize affordance regions, a key bottleneck for fine-grained embodied interaction. To address this, we introduce CVPC as a novel task and propose CrossPoint-Bench, a hierarchical benchmark with rigorous evaluation protocols. We further present CrossPoint-378K, the first large-scale, affordance-oriented cross-view point correspondence dataset, comprising 378K annotated correspondences across diverse object views and interaction contexts. Leveraging this dataset, we design CroPond, a model integrating perception, geometric reasoning, and explicit correspondence learning modules. Experiments demonstrate that CroPond achieves a 39.7% absolute accuracy gain over Gemini-2.5-Pro on CrossPoint-Bench, substantially narrowing the gap with human performance. This work contributes a new task, benchmark, and model architecture for advancing spatial understanding and cross-view alignment in VLMs.
📝 Abstract
Cross-view correspondence is a fundamental capability for spatial understanding and embodied AI. However, it remains far from realized in Vision-Language Models (VLMs), especially at the level of precise point correspondence, which is crucial for fine-grained affordance interaction. We therefore propose the Cross-View Point Correspondence (CVPC) task and CrossPoint-Bench, a comprehensive benchmark with a hierarchical design inspired by the human cognitive process of "perceive", "reason", and "correspond". Our evaluation shows that state-of-the-art models (e.g., Gemini-2.5-Pro) still fall far behind humans, with a gap of over 54.65% in overall accuracy, exposing the challenge of transitioning from coarse-grained judgment to fine-grained coordinate prediction. To address this problem, we construct CrossPoint-378K, a dataset of 378K question-answer pairs across 900 scenes, focused on actionable affordance regions that better reflect real-world manipulation and interaction scenarios. Furthermore, we propose CroPond, a model trained on the CrossPoint-378K dataset. CroPond achieves state-of-the-art performance on CrossPoint-Bench, surpassing Gemini-2.5-Pro by 39.7% in accuracy, offering a foundation for future work on cross-view correspondence. The benchmark, dataset, and model are publicly available at https://github.com/WangYipu2002/CrossPoint.