SkeletonAgent: An Agentic Interaction Framework for Skeleton-based Action Recognition

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In skeleton-based action recognition, querying large language models (LLMs) in isolation, without performance feedback, hinders the discrimination of semantically similar actions. Method: We propose a dual-agent collaborative optimization framework: a Questioner agent identifies category-level confusions, while a Selector agent generates joint-level semantic constraints, establishing a closed feedback loop between the recognition model and the LLM. Our approach integrates LLM-derived semantic priors with skeleton temporal modeling via prompt engineering, fine-grained cross-modal feature alignment, and dynamic semantic guidance. Results: The method achieves significant improvements over state-of-the-art methods on five benchmarks, including NTU RGB+D, particularly for fine-grained action distinctions. It is the first framework to enable interpretable, feedback-driven, and optimization-aware collaboration between LLMs and skeleton-based recognition models.

📝 Abstract
Recent advances in skeleton-based action recognition increasingly leverage semantic priors from Large Language Models (LLMs) to enrich skeletal representations. However, the LLM is typically queried in isolation from the recognition model and receives no performance feedback. As a result, it often fails to deliver the targeted discriminative cues critical to distinguish similar actions. To overcome these limitations, we propose SkeletonAgent, a novel framework that bridges the recognition model and the LLM through two cooperative agents, i.e., Questioner and Selector. Specifically, the Questioner identifies the most frequently confused classes and supplies them to the LLM as context for more targeted guidance. Conversely, the Selector parses the LLM's response to extract precise joint-level constraints and feeds them back to the recognizer, enabling finer-grained cross-modal alignment. Comprehensive evaluations on five benchmarks, including NTU RGB+D, NTU RGB+D 120, Kinetics-Skeleton, FineGYM, and UAV-Human, demonstrate that SkeletonAgent consistently outperforms state-of-the-art benchmark methods. The code is available at https://github.com/firework8/SkeletonAgent.
Problem

Research questions and friction points this paper is trying to address.

Improves discriminative cues for similar actions
Integrates LLM feedback into recognition model
Enhances cross-modal alignment for skeleton recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces two cooperative agents: Questioner and Selector
Questioner identifies confused classes to guide LLM context
Selector extracts joint-level constraints for cross-modal alignment
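The closed loop described above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's implementation: the function names, the keyword-matching "parser", and the toy data are all assumptions, and `llm_describe` stands in for an actual LLM call.

```python
# Hypothetical sketch of the Questioner–Selector feedback loop.
# All names here are illustrative assumptions, not the paper's API.
from collections import Counter

def questioner(confusion_pairs, top_k=1):
    """Questioner: pick the most frequently confused class pairs
    to supply to the LLM as context."""
    return [pair for pair, _ in Counter(confusion_pairs).most_common(top_k)]

def llm_describe(class_pair):
    """Stand-in for an LLM call that contrasts two confusable actions."""
    a, b = class_pair
    return f"'{a}' vs '{b}': focus on the wrist and elbow joint trajectories."

def selector(llm_response, joint_names):
    """Selector: parse the LLM's response into joint-level constraints
    (here, naive keyword matching) to feed back to the recognizer."""
    return [j for j in joint_names if j in llm_response]

# One iteration of the loop on toy data.
confusions = [("reading", "writing"), ("reading", "writing"), ("clapping", "waving")]
joints = ["wrist", "elbow", "knee", "ankle"]

for pair in questioner(confusions):
    constraints = selector(llm_describe(pair), joints)
    print(pair, constraints)
```

In the actual framework, the constraints would reweight or align joint-level skeleton features rather than just being printed, and the updated recognizer's confusion statistics would drive the next round of questioning.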
Hongda Liu (Sun Yat-sen University): Computer Vision, Low-level Vision, Image Restoration, Style Transfer
Yunfan Liu (University of Chinese Academy of Sciences)
Changlu Wang (NLPR, Institute of Automation, Chinese Academy of Sciences)
Yunlong Wang (NLPR, Institute of Automation, Chinese Academy of Sciences)
Zhenan Sun (Institute of Automation, Chinese Academy of Sciences): Biometrics, Pattern Recognition, Computer Vision