Mettle: Meta-Token Learning for Memory-Efficient Audio-Visual Adaptation

📅 2025-06-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low adaptation efficiency and high memory/computational overhead of adapting large-scale pretrained Transformers to audio-visual downstream tasks, this paper proposes Mettle (Meta-Token Learning), a lightweight framework. Methodologically, it introduces (1) Layer-Centric Distillation (LCD), which distills the audio-visual features of each Transformer layer, in parallel, into compact, transferable meta-tokens; and (2) Meta-Token Injection (MTI), which injects meta-tokens distilled from the top layer into earlier layers to guide cross-modal feature alignment and task-specific adaptation. By preserving pretrained knowledge while drastically improving parameter efficiency, Mettle achieves competitive accuracy on both classification and fine-grained segmentation tasks. Experiments demonstrate up to 52% memory reduction and 49% training-time savings compared to baseline methods, with consistent performance gains across diverse audio-visual benchmarks. The framework thus demonstrates strong effectiveness, broad generalizability, and deployment friendliness for resource-constrained multimodal learning scenarios.

📝 Abstract
We present Meta-Token Learning (Mettle), a simple and memory-efficient method for adapting large-scale pretrained transformer models to downstream audio-visual tasks. Instead of sequentially modifying the output feature distribution of the transformer backbone, Mettle utilizes a lightweight Layer-Centric Distillation (LCD) module to distill in parallel the intact audio or visual features embedded by each transformer layer into compact meta-tokens. This distillation process considers both pretrained knowledge preservation and task-specific adaptation. The obtained meta-tokens can be directly applied to classification tasks, such as audio-visual event localization and audio-visual video parsing. To further support fine-grained segmentation tasks, such as audio-visual segmentation, we introduce a Meta-Token Injection (MTI) module, which utilizes the audio and visual meta-tokens distilled from the top transformer layer to guide feature adaptation in earlier layers. Extensive experiments on multiple audio-visual benchmarks demonstrate that our method significantly reduces memory usage and training time while maintaining parameter efficiency and competitive accuracy.
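The abstract does not specify the distillation mechanism, but one natural reading of LCD is a small set of learnable meta-token queries that cross-attend over a frozen layer's token features and pool them into a compact summary. The sketch below is a minimal pure-Python illustration under that assumption; all names and shapes are hypothetical, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def distill_meta_tokens(layer_features, meta_queries):
    """Pool N token features (each a d-dim list) into K compact meta-tokens
    via scaled dot-product cross-attention with K learnable queries.
    Illustrative sketch of Layer-Centric Distillation, not the paper's code."""
    d = len(meta_queries[0])
    scale = 1.0 / math.sqrt(d)
    meta_tokens = []
    for q in meta_queries:
        # Attention scores of this query against every frozen-layer token.
        scores = [scale * sum(qi * fi for qi, fi in zip(q, f))
                  for f in layer_features]
        weights = softmax(scores)
        # Convex combination of the layer's features -> one meta-token.
        pooled = [sum(w * f[i] for w, f in zip(weights, layer_features))
                  for i in range(d)]
        meta_tokens.append(pooled)
    return meta_tokens

# Toy example: 4 frozen-layer tokens of dim 2 distilled into 2 meta-tokens.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
queries = [[1.0, 0.0], [0.0, 1.0]]
metas = distill_meta_tokens(features, queries)
print(len(metas), len(metas[0]))  # 2 meta-tokens, each of dim 2
```

Because the pooling runs independently per layer, such a module can distill every transformer layer in parallel, which is consistent with the memory and training-time savings the paper reports.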
Problem

Research questions and friction points this paper is trying to address.

Memory-efficient adaptation of large pretrained transformers
Distilling audio-visual features into compact meta-tokens
Enhancing fine-grained segmentation with meta-token guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-Centric Distillation for compact meta-tokens
Meta-Token Injection for fine-grained segmentation
Memory-efficient adaptation of transformer models
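For the injection side, the abstract says MTI uses top-layer meta-tokens to guide feature adaptation in earlier layers. One plausible realization, sketched below purely for illustration, is to let each earlier-layer feature attend over the meta-tokens and blend the pooled context back in residually; the function name, blending weight `alpha`, and mechanism are assumptions, not the authors' design.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def inject_meta_tokens(features, meta_tokens, alpha=0.5):
    """For each earlier-layer feature vector, pool the top-layer meta-tokens
    by scaled dot-product attention and add the context residually.
    Hypothetical sketch of Meta-Token Injection, not the paper's code."""
    d = len(features[0])
    scale = 1.0 / math.sqrt(d)
    out = []
    for f in features:
        scores = [scale * sum(fi * mi for fi, mi in zip(f, m))
                  for m in meta_tokens]
        weights = softmax(scores)
        # Context pooled from the meta-tokens for this feature.
        ctx = [sum(w * m[i] for w, m in zip(weights, meta_tokens))
               for i in range(d)]
        out.append([fi + alpha * ci for fi, ci in zip(f, ctx)])
    return out

# Toy example: two dim-2 features guided by one top-layer meta-token.
feats = [[1.0, 0.0], [0.0, 1.0]]
metas = [[1.0, 1.0]]
print(inject_meta_tokens(feats, metas))  # → [[1.5, 0.5], [0.5, 1.5]]
```

The residual form keeps the pretrained features intact while nudging earlier layers toward the task-adapted meta-token summary, matching the paper's stated goal of preserving pretrained knowledge.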
Jinxing Zhou
MBZUAI
Zhihui Li
School of Information Science and Technology, University of Science and Technology of China
Artificial Intelligence · Machine Learning · Multimedia
Yongqiang Yu
MBZUAI
Yanghao Zhou
National University of Singapore
Ruohao Guo
Peking University
Multi-Modal Learning · Computer Vision · Video Generation
Guangyao Li
Tsinghua University
Yuxin Mao
OpenNLP Lab
Mingfei Han
MBZUAI; University of Technology Sydney; Bytedance Seed; MMLab, SIAT
Object Recognition · Video Understanding · Vision Language Models · Robotics
Xiaojun Chang
MBZUAI, University of Science and Technology of China
Meng Wang
Hefei University of Technology