🤖 AI Summary
This work addresses the performance gap between large audio language models (LALMs) and their text-based counterparts, which the authors attribute to a disconnect between the acoustic and semantic feature spaces. To bridge this gap, they propose CORD, a framework that performs online cross-modal self-distillation within a single model, leveraging the textual modality as an internal teacher to align audio-conditioned reasoning with its text-conditioned counterpart. The alignment operates at multiple granularities: at the token level, an on-policy reverse KL divergence with importance-aware weighting prioritizes early and semantically critical tokens; at the sequence level, a judge-based global reward optimizes complete reasoning trajectories via Group Relative Policy Optimization (GRPO). Remarkably, using only 80,000 synthetically generated samples, CORD achieves significant improvements in audio-conditioned reasoning across multiple benchmarks, effectively narrowing the performance disparity between modalities.
📝 Abstract
Large Audio Language Models (LALMs) have garnered significant research interest. Despite being built upon text-based large language models (LLMs), LALMs frequently exhibit a degradation in knowledge and reasoning capabilities. We hypothesize that this limitation stems from the failure of current training paradigms to effectively bridge the acoustic-semantic gap within the feature representation space. To address this challenge, we propose CORD, a unified alignment framework that performs online cross-modal self-distillation. Specifically, it aligns audio-conditioned reasoning with its text-conditioned counterpart within a unified model. Leveraging the text modality as an internal teacher, CORD performs multi-granularity alignment throughout the audio rollout process. At the token level, it employs on-policy reverse KL divergence with importance-aware weighting to prioritize early and semantically critical tokens. At the sequence level, CORD introduces a judge-based global reward to optimize complete reasoning trajectories via Group Relative Policy Optimization (GRPO). Empirical results across multiple benchmarks demonstrate that CORD consistently enhances audio-conditioned reasoning and substantially bridges the audio-text performance gap with only 80k synthetic training samples, validating the efficacy and data efficiency of our on-policy, multi-level cross-modal alignment approach.
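The two alignment granularities described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes an exponential position decay for the "importance-aware" token weights and standard group-relative advantage normalization for GRPO, both of which are illustrative choices, and the function names (`weighted_reverse_kl`, `grpo_advantages`) are hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_reverse_kl(audio_logits, text_logits, decay=0.9):
    """Token-level loss: reverse KL D_KL(p_audio || p_text) per position,
    combined with position-decayed weights so early tokens dominate.
    Inputs are [seq_len, vocab] next-token logits from the SAME model,
    conditioned on audio (student) vs. text (internal teacher) input.
    The exponential decay is an assumed stand-in for CORD's
    importance-aware weighting."""
    p_a = softmax(audio_logits)
    p_t = softmax(text_logits)
    kl_per_token = (p_a * (np.log(p_a + 1e-12) - np.log(p_t + 1e-12))).sum(-1)
    w = decay ** np.arange(len(kl_per_token))
    w = w / w.sum()
    return float((w * kl_per_token).sum())

def grpo_advantages(rewards, eps=1e-8):
    """Sequence-level signal: normalize judge rewards within a group of
    rollouts, as in GRPO's group-relative advantage estimate."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)
```

When the audio- and text-conditioned logits coincide, the token-level loss is zero; any divergence yields a positive penalty concentrated on early positions, while `grpo_advantages` rewards trajectories that score above their group's mean.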