Universal Skeleton Understanding via Differentiable Rendering and MLLMs

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of multimodal large language models (MLLMs): they cannot directly process non-visual structured data such as human skeletal sequences, and existing workarounds suffer from information loss or limited generalization. The authors propose SkeletonLLM, a general-purpose framework for skeleton understanding built on DrAction, a differentiable, format-agnostic renderer that maps arbitrary skeleton sequences end-to-end into compact image sequences, letting MLLMs interpret skeletal data natively through visual perception. A synergistic training strategy that combines Causal Reasoning Distillation with Discriminative Finetuning further improves structured reasoning and generalization across diverse tasks, including action recognition, caption generation, logical reasoning, and cross-format transfer, offering an effective pathway for extending MLLMs to non-native modalities.

📝 Abstract
Multimodal large language models (MLLMs) exhibit strong visual-language reasoning, yet remain confined to their native modalities and cannot directly process structured, non-visual data such as human skeletons. Existing methods either compress skeleton dynamics into lossy feature vectors for text alignment, or quantize motion into discrete tokens that generalize poorly across heterogeneous skeleton formats. We present SkeletonLLM, which achieves universal skeleton understanding by translating arbitrary skeleton sequences into the MLLM's native visual modality. At its core is DrAction, a differentiable, format-agnostic renderer that converts skeletal kinematics into compact image sequences. Because the pipeline is end-to-end differentiable, MLLM gradients can directly guide the rendering to produce task-informative visual tokens. To further enhance reasoning capabilities, we introduce a cooperative training strategy: Causal Reasoning Distillation transfers structured, step-by-step reasoning from a teacher model, while Discriminative Finetuning sharpens decision boundaries between confusable actions. SkeletonLLM demonstrates strong generalization on diverse tasks including recognition, captioning, reasoning, and cross-format transfer -- suggesting a viable path for applying MLLMs to non-native modalities. Code will be released upon acceptance.
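The abstract gives no implementation details for DrAction, but the core idea of a differentiable, format-agnostic renderer can be illustrated with a minimal sketch: splatting each joint onto an image grid with a Gaussian kernel makes every pixel a smooth function of the joint coordinates, so gradients from a downstream model could in principle flow back through the rendering. All names and parameters below (`render_skeleton`, `sigma`, the image size) are illustrative assumptions, not from the paper.

```python
import numpy as np

def render_skeleton(joints, size=32, sigma=1.5):
    """Splat 2D joints onto an image with Gaussian kernels.

    Each pixel is a smooth (Gaussian) function of the joint
    coordinates, so the map joints -> image is differentiable.
    `joints` is any (J, 2) array with coordinates in [0, 1]^2;
    J is arbitrary, which is what "format-agnostic" loosely means here.
    """
    # pixel-center grid, normalized to [0, 1]
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1)
    img = np.zeros((size, size))
    for jx, jy in joints:
        d2 = (xs - jx) ** 2 + (ys - jy) ** 2
        img += np.exp(-d2 / (2 * (sigma / size) ** 2))
    return img / max(len(joints), 1)

# toy 3-joint "skeleton" in normalized coordinates
skel = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])
frame = render_skeleton(skel)
print(frame.shape)  # (32, 32)
```

In the actual system the renderer would be driven end-to-end by MLLM gradients (e.g. via an autodiff framework rather than NumPy), with the loop over frames producing the compact image sequence the abstract describes.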
Problem

Research questions and friction points this paper is trying to address.

skeleton understanding
multimodal large language models
non-visual data
format heterogeneity
structured data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable Rendering
Multimodal Large Language Models
Skeleton Understanding
Causal Reasoning Distillation
Format-Agnostic Representation
Ziyi Wang — University of Electronic Science and Technology of China
Peiming Li — School of Electronics Engineering and Computer Science, Peking University, Beijing, China
Xinshun Wang — Peking University
Yang Tang — Tencent, Shenzhen, Guangdong, China
Kai-Kuang Ma — Nanyang Technological University
Mengyuan Liu — School of Electronics Engineering and Computer Science, Peking University, Beijing, China