🤖 AI Summary
Multimodal large language models (MLLMs) remain limited in fine-grained image understanding, particularly keypoint localization for deformable and articulated objects. To address this, we propose KptLLM++, the first general-purpose keypoint comprehension framework, introducing a novel "identify-then-detect" paradigm with structured chain-of-thought reasoning to achieve unified keypoint localization across scenes and categories. The method jointly leverages instruction-driven semantic parsing and pixel-level keypoint regression, and is trained on a large-scale, multi-category dataset of over 500K samples. Across multiple benchmarks it achieves state-of-the-art performance, substantially improving localization accuracy and generalization under complex occlusions and diverse object appearances. It also enhances semantic controllability in human-AI collaborative interaction by enabling precise, instruction-guided keypoint interpretation.
📝 Abstract
The emergence of Multimodal Large Language Models (MLLMs) has revolutionized image understanding by bridging textual and visual modalities. However, these models often struggle with capturing fine-grained semantic information, such as the precise identification and analysis of object keypoints. Keypoints, as structure-aware, pixel-level, and compact representations of objects, particularly articulated ones, play a crucial role in applications such as fine-grained image analysis, object retrieval, and behavior recognition. In this paper, we propose KptLLM++, a novel multimodal large language model specifically designed for generic keypoint comprehension through the integration of diverse input modalities guided by user-defined instructions. By unifying keypoint detection across varied contexts, KptLLM++ establishes itself as an advanced interface, fostering more effective human-AI collaboration. The model is built upon a novel identify-then-detect paradigm, which first interprets keypoint semantics and subsequently localizes their precise positions through a structured chain-of-thought reasoning mechanism. To push the boundaries of performance, we have scaled up the training dataset to over 500K samples, encompassing diverse objects, keypoint categories, image styles, and scenarios with complex occlusions. This extensive scaling enables KptLLM++ to unlock its potential, achieving remarkable accuracy and generalization. Comprehensive experiments on multiple keypoint detection benchmarks demonstrate its state-of-the-art performance, underscoring its potential as a unified solution for fine-grained image understanding and its transformative implications for human-AI interaction.
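To make the identify-then-detect paradigm concrete, the sketch below shows the two-stage flow in plain Python: semantics first, coordinates second. This is a minimal illustration only; the `model` object, its `identify`/`detect` methods, and the `Keypoint` type are hypothetical stand-ins, not the paper's published API.

```python
# Minimal sketch of the identify-then-detect paradigm described above.
# All names here (identify, detect, Keypoint) are hypothetical stand-ins;
# the paper does not publish this interface.
from dataclasses import dataclass


@dataclass
class Keypoint:
    name: str   # semantic label, e.g. "left elbow"
    x: float    # pixel column
    y: float    # pixel row


def identify_then_detect(model, image, instruction: str) -> list[Keypoint]:
    """Two-stage chain of thought: (1) interpret which keypoints the
    user instruction refers to, (2) regress their pixel positions."""
    # Stage 1: instruction-driven semantic parsing -> keypoint names
    names = model.identify(image, instruction)  # e.g. ["left elbow", "right knee"]
    # Stage 2: pixel-level localization for each identified keypoint
    return [Keypoint(n, *model.detect(image, n)) for n in names]
```

The point of the ordering is that localization is conditioned on an explicit semantic interpretation step, rather than mapping instructions to coordinates in a single opaque pass.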