CORD: Bridging the Audio-Text Reasoning Gap via Weighted On-policy Cross-modal Distillation

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance gap between large audio language models (LALMs) and their text-based counterparts, which the authors attribute to a disconnect between the acoustic and semantic feature spaces. To bridge this gap, they propose CORD, a framework that performs online cross-modal self-distillation within a unified model, using the textual modality as an internal teacher to align audio-conditioned reasoning with its text-conditioned counterpart. The approach combines token-level on-policy reverse KL divergence with importance-aware weighting and sequence-level optimization of complete reasoning trajectories via a judge-based reward and Group Relative Policy Optimization (GRPO). Using only 80,000 synthetically generated samples, CORD achieves consistent improvements in audio-conditioned reasoning across multiple benchmarks, substantially narrowing the performance disparity between modalities.
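The sequence-level optimization mentioned above relies on GRPO's group-relative advantage: rewards for a group of sampled rollouts are normalized against the group's own mean and standard deviation, removing the need for a learned value function. A minimal sketch of that normalization step is below; the reward values and group size are illustrative, and the paper's judge-based reward model is not reproduced here.

```python
# Hedged sketch: group-relative advantage normalization as used in GRPO.
# Rewards here are placeholder numbers, not outputs of CORD's judge.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize per-trajectory rewards within a sampled group:
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four rollouts of the same prompt, scored by some judge.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.5])
```

Each trajectory's advantage is then used to weight its policy-gradient update, so rollouts that the judge scores above the group average are reinforced and below-average ones are suppressed.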

📝 Abstract
Large Audio Language Models (LALMs) have garnered significant research interest. Despite being built upon text-based large language models (LLMs), LALMs frequently exhibit a degradation in knowledge and reasoning capabilities. We hypothesize that this limitation stems from the failure of current training paradigms to effectively bridge the acoustic-semantic gap within the feature representation space. To address this challenge, we propose CORD, a unified alignment framework that performs online cross-modal self-distillation. Specifically, it aligns audio-conditioned reasoning with its text-conditioned counterpart within a unified model. Leveraging the text modality as an internal teacher, CORD performs multi-granularity alignment throughout the audio rollout process. At the token level, it employs on-policy reverse KL divergence with importance-aware weighting to prioritize early and semantically critical tokens. At the sequence level, CORD introduces a judge-based global reward to optimize complete reasoning trajectories via Group Relative Policy Optimization (GRPO). Empirical results across multiple benchmarks demonstrate that CORD consistently enhances audio-conditioned reasoning and substantially bridges the audio-text performance gap with only 80k synthetic training samples, validating the efficacy and data efficiency of our on-policy, multi-level cross-modal alignment approach.
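The token-level objective described in the abstract can be sketched as a reverse KL between the student's audio-conditioned distribution and the teacher's text-conditioned distribution at each step, scaled by a weight that prioritizes early tokens. The exponential decay schedule and weight form below are illustrative assumptions, not the paper's exact importance-aware formulation.

```python
# Hedged sketch of a token-level reverse-KL distillation loss with
# position-decayed importance weights (assumed form: w_t = decay**t).
import math

def weighted_reverse_kl(student_probs, teacher_probs, decay=0.9):
    """Compute sum_t w_t * KL(student_t || teacher_t).

    student_probs[t] / teacher_probs[t]: probability distributions over
    the vocabulary at step t of an on-policy (student-sampled) rollout.
    Reverse KL is mode-seeking: the student is penalized for placing
    mass where the teacher places little.
    """
    loss = 0.0
    for t, (s, p) in enumerate(zip(student_probs, teacher_probs)):
        kl = sum(si * math.log(si / pi) for si, pi in zip(s, p) if si > 0)
        loss += (decay ** t) * kl
    return loss
```

With identical student and teacher distributions the loss is zero; as the student's audio-conditioned predictions drift from the text teacher's, early-token divergences dominate the penalty.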
Problem

Research questions and friction points this paper is trying to address.

Audio-Language Models
Reasoning Gap
Cross-modal Alignment
Acoustic-Semantic Gap
Multimodal Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal distillation
on-policy alignment
audio-language reasoning
multi-granularity alignment
Group Relative Policy Optimization
Jing Hu
Associate professor, School of Computer Science and Engineering, Xi'an University of Technology
hyperspectral image processing
Danxiang Zhu
ERNIE Team, Baidu
Xianlong Luo
ERNIE Team, Baidu
Dan Zhang
ERNIE Team, Baidu
Shuwei He
ERNIE Team, Baidu; College of Computer Science, Inner Mongolia University
Yishu Lei
ERNIE Team, Baidu
Haitao Zheng
Neubauer Professor of Computer Science, University of Chicago
Mobile Computing, Security and Privacy
Shikun Feng
Baidu
NLP
Jingzhou He
ERNIE Team, Baidu
Yu Sun
Baidu
Natural Language Processing, Deep Learning
Hua Wu
ERNIE Team, Baidu
Haifeng Wang
Baidu
NLP, MT, Search, Speech, Data Mining