PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing eXplainable AI (XAI) methods for video action recognition lack sufficient interpretability, particularly in modeling motion dynamics and temporal dependencies. Method: This paper introduces the first motion-aware concept bottleneck model grounded in human pose sequences. Leveraging skeletal keypoint representations, it enables unsupervised, dual-granularity concept discovery—capturing both intra-frame static pose configurations and inter-frame motion patterns—and integrates these concepts into a differentiable, concept-guided classification architecture. Contribution/Results: By embedding interpretability directly into the model structure, our approach explicitly decouples spatial configuration from temporal dynamics, enabling concept-level attribution and human-in-the-loop intervention. Evaluated on KTH, Penn-Action, and HAA500, it achieves state-of-the-art accuracy while simultaneously generating precise action heatmaps and auditable, traceable reasoning paths—thereby reconciling high predictive performance with strong model transparency.

Technology Category

Application Category

📝 Abstract
Human action recognition (HAR) has achieved impressive results with deep learning models, but their decision-making process remains opaque due to their black-box nature. Ensuring interpretability is crucial, especially for real-world applications requiring transparency and accountability. Existing video XAI methods primarily rely on feature attribution or static textual concepts, both of which struggle to capture motion dynamics and temporal dependencies essential for action understanding. To address these challenges, we propose Pose Concept Bottleneck for Explainable Action Recognition (PCBEAR), a novel concept bottleneck framework that introduces human pose sequences as motion-aware, structured concepts for video action recognition. Unlike methods based on pixel-level features or static textual descriptions, PCBEAR leverages human skeleton poses, which focus solely on body movements, providing robust and interpretable explanations of motion dynamics. We define two types of pose-based concepts: static pose concepts for spatial configurations at individual frames, and dynamic pose concepts for motion patterns across multiple frames. To construct these concepts, PCBEAR applies clustering to video pose sequences, allowing for automatic discovery of meaningful concepts without manual annotation. We validate PCBEAR on KTH, Penn-Action, and HAA500, showing that it achieves high classification performance while offering interpretable, motion-driven explanations. Our method provides both strong predictive performance and human-understandable insights into the model's reasoning process, enabling test-time interventions for debugging and improving model behavior.
Problem

Research questions and friction points this paper is trying to address.

Improves interpretability in human action recognition models
Addresses limitations of static concepts in capturing motion dynamics
Introduces pose-based concepts for transparent action classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses human pose sequences as motion-aware concepts
Applies clustering to auto-discover pose-based concepts
Combines static and dynamic pose concepts for recognition
🔎 Similar Papers
No similar papers found.