🤖 AI Summary
This work addresses two challenges in surgical video question answering: linguistic variation in how questions are phrased introduces bias, and existing parameter-efficient fine-tuning methods struggle to model sparse temporal evidence spread across frames. To this end, we propose TemporalDoRA, a video-oriented parameter-efficient fine-tuning approach that, for the first time, integrates a lightweight temporal multi-head attention mechanism into the low-rank adaptation (LoRA) architecture. Combined with a selective weight decomposition strategy, TemporalDoRA updates only the low-rank branches, enabling temporally aware parameter adaptation while keeping the backbone frozen. This design simultaneously preserves temporal consistency, robustness, and parameter efficiency. Experiments on the REAL-Colon-VQA and EndoVis18-VQA datasets demonstrate significant improvements in answer accuracy on out-of-template questions, and ablation studies confirm that the proposed temporal mixing mechanism is the primary driver of these gains.
📝 Abstract
Surgical Video Question Answering (VideoQA) requires accurate temporal grounding while remaining robust to natural variation in how clinicians phrase questions, a setting in which linguistic bias can arise. Standard Parameter-Efficient Fine-Tuning (PEFT) methods adapt pretrained projections without explicitly modeling frame-to-frame interactions within the adaptation pathway, limiting their ability to exploit sparse temporal evidence. We introduce TemporalDoRA, a video-specific PEFT formulation that extends Weight-Decomposed Low-Rank Adaptation by (i) inserting lightweight temporal Multi-Head Attention (MHA) inside the low-rank bottleneck of the vision encoder and (ii) selectively applying weight decomposition only to the trainable low-rank branch rather than the full adapted weight. This design enables temporally aware updates while preserving a frozen backbone and stable scaling. By mixing information across frames within the adaptation subspace, TemporalDoRA steers updates toward temporally consistent visual cues and improves robustness with minimal parameter overhead. To benchmark this setting, we present REAL-Colon-VQA, a colonoscopy VideoQA dataset with 6,424 clip--question pairs, including paired rephrased Out-of-Template questions to evaluate sensitivity to linguistic variation. TemporalDoRA improves Out-of-Template performance, and ablation studies confirm that temporal mixing inside the low-rank branch is the primary driver of these gains. We also validate on EndoVis18-VQA adapted to short clips and observe consistent improvements on the Out-of-Template split. Code and dataset are available at~\href{https://anonymous.4open.science/r/TemporalDoRA-BFC8/}{Anonymous GitHub}.
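To make the mechanism concrete, here is a minimal PyTorch sketch of the two ideas described above: temporal MHA inserted inside the low-rank bottleneck, and DoRA-style magnitude/direction decomposition applied only to the trainable low-rank branch. All names (`TemporalDoRALinear`, the per-frame input layout, and the exact placement of the decomposition on the up-projection's columns) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TemporalDoRALinear(nn.Module):
    """Hypothetical sketch of a TemporalDoRA-adapted linear layer.

    The frozen base projection is augmented by a low-rank branch whose
    bottleneck activations are mixed across frames with multi-head
    attention; weight decomposition (magnitude * direction) is applied
    only to the low-rank up-projection, not the full adapted weight.
    """

    def __init__(self, d_in, d_out, rank=8, n_heads=2, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)              # frozen backbone
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection, zero init
        self.m = nn.Parameter(torch.ones(rank))             # learned column magnitudes
        self.mha = nn.MultiheadAttention(rank, n_heads, batch_first=True)
        self.scale = alpha / rank

    def forward(self, x):
        # x: (batch, frames, d_in) -- one pooled feature per frame (assumption)
        z = x @ self.A.T                         # (batch, frames, rank)
        z_mix, _ = self.mha(z, z, z)             # temporal mixing across frames
        # selective decomposition: unit-norm direction of B's columns,
        # rescaled by the learned magnitude vector m
        B_dir = self.B / (self.B.norm(dim=0, keepdim=True) + 1e-6)
        delta = (z + z_mix) @ (self.m * B_dir).T * self.scale
        return self.base(x) + delta
```

Because `B` is zero-initialized, the adaptation branch contributes nothing at the start of training, so the adapted layer initially matches the frozen backbone; only `A`, `B`, `m`, and the temporal MHA are trained.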