Imagine How To Change: Explicit Procedure Modeling for Change Captioning

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes ProCap, a novel framework that reframes change captioning as dynamic process reasoning, addressing the limitation of existing methods that rely solely on static image pairs and thus fail to capture the "how" of change. ProCap explicitly models the temporal evolution of change through a two-stage design, introducing learnable process queries to infer intermediate states without requiring ground-truth intermediate frames. The framework integrates automatically generated keyframes, mask reconstruction pretraining, and an end-to-end encoder-decoder architecture to enhance temporal understanding and descriptive accuracy. Evaluated on three standard benchmarks, ProCap significantly outperforms current state-of-the-art methods, demonstrating the effectiveness of explicitly modeling change as a dynamic process.

📝 Abstract
Change captioning generates descriptions that explicitly describe the differences between two visually similar images. Existing methods operate on static image pairs, thus ignoring the rich temporal dynamics of the change procedure, which is the key to understanding not only what has changed but also how it occurs. We introduce ProCap, a novel framework that reformulates change modeling from static image comparison to dynamic procedure modeling. ProCap features a two-stage design: The first stage trains a procedure encoder to learn the change procedure from a sparse set of keyframes. These keyframes are obtained by automatically generating intermediate frames to make the implicit procedural dynamics explicit and then sampling them to mitigate redundancy. Then the encoder learns to capture the latent dynamics of these keyframes via a caption-conditioned, masked reconstruction task. The second stage integrates this trained encoder within an encoder-decoder model for captioning. Instead of relying on explicit frames from the previous stage -- a process incurring computational overhead and sensitivity to visual noise -- we introduce learnable procedure queries to prompt the encoder for inferring the latent procedure representation, which the decoder then translates into text. The entire model is then trained end-to-end with a captioning loss, ensuring the encoder's output is both temporally coherent and captioning-aligned. Experiments on three datasets demonstrate the effectiveness of ProCap. Code and pre-trained models are available at https://github.com/BlueberryOreo/ProCap.
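The "learnable procedure queries" idea from the abstract can be pictured as a cross-attention readout: a small set of trainable query vectors attends over the image-pair features to pool a latent procedure representation, without ever generating explicit intermediate frames at inference time. The sketch below is illustrative only -- the single-head attention, dimensions, and random stand-in features are assumptions, not ProCap's actual architecture or hyperparameters.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention: each query pools the values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

random.seed(0)
D = 8          # feature dimension (illustrative)
N_QUERIES = 4  # number of learnable procedure queries (illustrative)
N_TOKENS = 6   # visual tokens from the "before"/"after" image pair (illustrative)

# Learnable procedure queries: randomly initialized here; in ProCap they
# would be trained end-to-end with the captioning loss.
procedure_queries = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N_QUERIES)]
# Stand-ins for visual features extracted from the image pair.
visual_tokens = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N_TOKENS)]

# The queries prompt the encoder for a latent procedure representation:
# one pooled vector per inferred intermediate state, which a decoder
# would then translate into text.
procedure_repr = cross_attention(procedure_queries, visual_tokens, visual_tokens)
print(len(procedure_repr), len(procedure_repr[0]))  # 4 8
```

Because the queries are parameters rather than generated frames, this readout avoids the computational overhead and visual-noise sensitivity that the abstract attributes to explicit intermediate frames.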
Problem

Research questions and friction points this paper is trying to address.

change captioning
temporal dynamics
procedure modeling
image comparison
visual change
Innovation

Methods, ideas, or system contributions that make the work stand out.

change captioning
procedure modeling
temporal dynamics
learnable queries
masked reconstruction
Jiayang Sun
School of Computer Science and Technology, Soochow University, Jiangsu, China
Zixin Guo
Department of Computer Science, Aalto University, Espoo, Finland
Min Cao
School of Computer Science and Technology, Soochow University, Jiangsu, China
Guibo Zhu
Institute of Automation, Chinese Academy of Sciences
Artificial Intelligence, Computer Vision, Machine Learning
Jorma Laaksonen
Aalto University
Pattern Recognition, Computer Vision, Machine Learning, Neural Networks