Crab: A Unified Audio-Visual Scene Understanding Model with Explicit Cooperation

πŸ“… 2025-03-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses performance interference in audio-visual multimodal multitask learning caused by modality heterogeneity and task coupling, proposing the first explicit cross-task cooperation framework. Methodologically: (1) it introduces AV-UIE, the first audio-visual unified instruction-tuning dataset with explicit chain-of-reasoning annotations; (2) it designs an interaction-aware LoRA architecture with multiple LoRA heads, each learning a different aspect of audio-visual data interaction while enabling explicit inter-task cooperation; (3) it combines instruction tuning, joint multitask optimization, and refined data annotation. Experiments demonstrate that the model consistently outperforms existing unified models across temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding tasks, and matches or exceeds task-specific models on several benchmarks. The code and the AV-UIE dataset are publicly released.
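The page does not reproduce the layer definition, but a minimal sketch helps make the interaction-aware multi-head LoRA idea concrete. Everything below (the class name, the learnable gate that mixes head outputs, the hyperparameters) is a hypothetical illustration under standard LoRA assumptions (W' = W + (alpha/r)·B·A, with A randomly initialized and B zero-initialized), not the authors' implementation.

```python
# Hypothetical multi-head LoRA adapter -- an illustrative sketch, not Crab's code.
import torch
import torch.nn as nn


class MultiHeadLoRALinear(nn.Module):
    """A frozen linear layer augmented with several low-rank (LoRA) heads.

    Each head learns its own rank-r update B_h @ A_h; a learnable softmax
    gate mixes the head outputs, one plausible way to let different heads
    specialize on different aspects of audio-visual interaction.
    """

    def __init__(self, base: nn.Linear, num_heads: int = 4, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep the pretrained weight frozen
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(num_heads, r, in_f) * 0.01)  # down-projections
        self.B = nn.Parameter(torch.zeros(num_heads, out_f, r))        # up-projections, zero init
        self.gate = nn.Parameter(torch.zeros(num_heads))               # learnable head mixing
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        down = torch.einsum("...i,hri->...hr", x, self.A)    # per-head rank-r projection
        up = torch.einsum("...hr,hor->...ho", down, self.B)  # back to the output dim
        weights = torch.softmax(self.gate, dim=0).unsqueeze(-1)
        delta = (weights * up).sum(dim=-2)                   # gate-weighted head mixture
        return self.base(x) + self.scale * delta


# Example: wrap a 768-d projection and run a (batch, seq, dim) tensor through it.
layer = MultiHeadLoRALinear(nn.Linear(768, 768), num_heads=4, r=8)
out = layer(torch.randn(2, 16, 768))  # -> shape (2, 16, 768)
```

Because B starts at zero, the adapter initially leaves the pretrained layer's behavior unchanged; the gate then lets training redistribute capacity across heads, consistent with the paper's observation that individual LoRA heads acquire distinct audio-visual understanding abilities.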

πŸ“ Abstract
In recent years, numerous tasks have been proposed to encourage models to develop specific capabilities in understanding audio-visual scenes, primarily categorized into temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding. In contrast, humans possess a unified understanding ability across such diversified tasks. Therefore, designing an audio-visual model with the general capability to unify these tasks is of great value. However, naively training on all tasks jointly can lead to interference due to the heterogeneity of audio-visual data and the complex relationships among tasks. We argue that this problem can be solved through explicit cooperation among tasks. To achieve this goal, we propose a unified learning method that achieves explicit inter-task cooperation from both the data and model perspectives. Specifically, since the labels of existing datasets are simple words, we carefully refine these datasets and construct an Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning process (AV-UIE), which clarifies the cooperative relationships among tasks. Subsequently, to facilitate concrete cooperation during the learning stage, an interaction-aware LoRA structure with multiple LoRA heads is designed to learn different aspects of audio-visual data interaction. By unifying explicit cooperation across the data and model aspects, our method not only surpasses existing unified audio-visual models on multiple tasks, but also outperforms most specialized models on certain tasks. Furthermore, we visualize the process of explicit cooperation and, surprisingly, find that each LoRA head has a certain audio-visual understanding ability. Code and dataset: https://github.com/GeWu-Lab/Crab
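Since the abstract contrasts simple word labels with AV-UIE's explicit reasoning process, a made-up sample can show what such an instruction-tuning record might look like. The field names and content below are hypothetical, not the released schema; see the repository above for the actual dataset.

```python
# Illustrative AV-UIE-style record (hypothetical fields; the real schema may differ).
sample = {
    "task": "temporal_localization",
    "instruction": "When does the dog bark in the video?",
    # The explicit reasoning chain ties audio evidence to visual evidence --
    # the kind of cross-modal signal the paper argues clarifies how tasks
    # can cooperate, instead of a bare one-word label.
    "reasoning": "A barking sound occurs between 3s and 5s in the audio; "
                 "the frames in that interval show a dog opening its mouth, "
                 "so the event is localized to 3s-5s.",
    "answer": "3s-5s",
}
```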
Problem

Research questions and friction points this paper is trying to address.

Develop a unified audio-visual model for diverse scene understanding tasks.
Address task interference in joint training due to data heterogeneity.
Enhance model performance through explicit inter-task cooperation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified learning method for audio-visual tasks
Audio-Visual Unified Instruction-tuning dataset (AV-UIE)
Interaction-aware LoRA structure with multiple heads
Henghui Du
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing
Guangyao Li
Department of Computer Science and Technology, Tsinghua University, Beijing, China
Chang Zhou
AI Technology Center, Online Video Business Unit, Tencent PCG
Chunjie Zhang
Beijing Jiaotong University
multimedia · computer vision
Alan Zhao
AI Technology Center, Online Video Business Unit, Tencent PCG
Di Hu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing