🤖 AI Summary
To address the semantic-geometric disconnect between high-level task semantics and low-level geometric features in robotic manipulation, this paper proposes a closed-loop spatial-semantic joint reasoning framework. Methodologically, it integrates automatic geometric primitive extraction, semantic grounding, a closed-loop feedback mechanism, and a fine-tuned VLM (Qwen2.5VL-PA) to enable annotation-free dynamic semantic anchoring. The authors further introduce the first affordance-aware spatial-semantic joint benchmark, supporting cross-category keypoint and axis detection as well as fine-grained semantic-functional relationship modeling. Experiments demonstrate that the approach matches human-annotated baselines across diverse real-world manipulation tasks, substantially reducing annotation dependency while enhancing robots' autonomous understanding of object functional properties and task objectives.
📝 Abstract
The fragmentation between high-level task semantics and low-level geometric features remains a persistent challenge in robotic manipulation. While vision-language models (VLMs) have shown promise in generating affordance-aware visual representations, the lack of semantic grounding in canonical spaces and the reliance on manual annotations severely limit their ability to capture dynamic semantic-affordance relationships. To address these limitations, we propose Primitive-Aware Semantic Grounding (PASG), a closed-loop framework that introduces: (1) automatic primitive extraction through geometric feature aggregation, enabling cross-category detection of keypoints and axes; (2) VLM-driven semantic anchoring that dynamically couples geometric primitives with functional affordances and task-relevant descriptions; (3) a spatial-semantic reasoning benchmark and a fine-tuned VLM (Qwen2.5VL-PA). We demonstrate PASG's effectiveness in practical robotic manipulation tasks across diverse scenarios, achieving performance comparable to manual annotations. PASG achieves a finer-grained semantic-affordance understanding of objects, establishing a unified paradigm for bridging geometric primitives with task semantics in robotic manipulation.