🤖 AI Summary
Existing video captioning benchmarks and models lack explicit causal-temporal narrative capability, failing to model character-driven causal dependencies and the sequential evolution of events. To address this, we introduce CTN—the first benchmark explicitly designed for causal-chain modeling in video captioning—and propose the Cause-Effect Network (CEN), a novel architecture featuring a dual-stream encoder that disentangles cause and effect dynamics, integrated with causal-aware feature alignment and joint decoding. CTN annotations are generated via large language models with few-shot prompting. On MSVD-CTN and MSRVTT-CTN, CEN achieves CIDEr scores of 17.88 and 17.44, respectively—substantially surpassing state-of-the-art methods. Moreover, it demonstrates strong cross-dataset generalization and, for the first time, enables fine-grained causal-temporal narrative generation.
📝 Abstract
Existing video captioning benchmarks and models lack causal-temporal narratives, i.e., sequences of events linked through cause and effect, unfolding over time and driven by characters or agents. This lack of narrative restricts models' ability to generate text descriptions that capture the causal and temporal dynamics inherent in video content. To address this gap, we propose NarrativeBridge, an approach comprising: (1) a novel Causal-Temporal Narrative (CTN) captions benchmark, generated using a large language model with few-shot prompting, that explicitly encodes cause-effect temporal relationships in video descriptions; and (2) a Cause-Effect Network (CEN) with separate encoders for capturing cause and effect dynamics, enabling effective learning and generation of captions with causal-temporal narrative. Extensive experiments demonstrate that CEN significantly outperforms state-of-the-art models in articulating the causal and temporal aspects of video content, achieving 17.88 and 17.44 CIDEr on the MSVD-CTN and MSRVTT-CTN datasets, respectively. Cross-dataset evaluations further showcase CEN's strong generalization capabilities. The proposed framework understands and generates nuanced text descriptions with the intricate causal-temporal narrative structures present in videos, addressing a critical limitation in video captioning. For project details, visit https://narrativebridge.github.io/.