Concepts in Motion: Temporal Bottlenecks for Interpretable Video Classification

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Concept bottleneck models (CBMs) struggle to capture the temporal dependencies inherent in video data. To address this, we propose MoTIF, an interpretable Transformer-inspired framework for video classification. MoTIF models semantic concepts (e.g., objects, attributes, and action primitives such as 'bow' or 'shoot') as time-evolving variables and learns their representations jointly through a concept bottleneck mechanism. The design supports three complementary analyses: global concept importance scoring across the entire video, local concept–prediction relevance within temporal windows, and tracking of concept trajectories over time. MoTIF handles videos of arbitrary length while preserving competitive classification accuracy and delivering fine-grained, multi-granular concept-level explanations. Experiments show that the concept-based modeling paradigm transfers effectively to temporal tasks, bridging interpretability and representational power in video understanding.

📝 Abstract
Conceptual models such as Concept Bottleneck Models (CBMs) have driven substantial progress in improving interpretability for image classification by leveraging human-interpretable concepts. However, extending these models from static images to sequences of images, such as video data, introduces a significant challenge due to the temporal dependencies inherent in videos, which are essential for capturing actions and events. In this work, we introduce MoTIF (Moving Temporal Interpretable Framework), an architectural design inspired by a transformer that adapts the concept bottleneck framework for video classification and handles sequences of arbitrary length. Within the video domain, concepts refer to semantic entities such as objects, attributes, or higher-level components (e.g., 'bow', 'mount', 'shoot') that reoccur across time, forming motifs that collectively describe and explain actions. Our design explicitly enables three complementary perspectives: global concept importance across the entire video, local concept relevance within specific windows, and temporal dependencies of a concept over time. Our results demonstrate that the concept-based modeling paradigm can be effectively transferred to video data, enabling a better understanding of concept contributions in temporal contexts while maintaining competitive performance. Code available at github.com/patrick-knab/MoTIF.
Problem

Research questions and friction points this paper is trying to address.

Extending interpretable concept bottleneck models from static images to video data
Capturing temporal dependencies in videos for action and event classification
Enabling global, local, and temporal concept analysis for video interpretation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based concept bottleneck for videos
Handles temporal dependencies across arbitrary lengths
Analyzes global, local, and temporal concept importance
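The bottleneck idea behind these contributions can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is linked in the abstract); all dimensions and weight matrices below are hypothetical stand-ins. The key property it demonstrates: the classifier sees only per-frame concept activations, so global importance, windowed relevance, and per-concept trajectories all fall out of the same intermediate tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper):
# T frames, D-dim frame features, K concepts, C action classes.
T, D, K, C = 12, 16, 4, 3

# Stand-ins for a video backbone's per-frame features and learned weights.
frame_feats = rng.normal(size=(T, D))
W_concept = rng.normal(size=(D, K))   # frame features -> concept logits
W_cls = rng.normal(size=(K, C))       # pooled concepts -> class logits

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Bottleneck: each frame is mapped to concept activations first;
# the classifier operates only on these activations.
concept_traj = sigmoid(frame_feats @ W_concept)   # (T, K)
pooled = concept_traj.mean(axis=0)                # pool concepts over time
logits = pooled @ W_cls                           # (C,) class scores

# The three complementary views listed above:
global_importance = pooled                  # per-concept, whole video
local_window = concept_traj[4:8].mean(axis=0)  # relevance in frames 4-7
trajectory = concept_traj[:, 0]             # concept 0 tracked over time
```

Because every frame's concept activations are bounded in [0, 1] and kept explicitly, any temporal slice or aggregation of `concept_traj` yields a valid concept-level explanation without retraining.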